The specifics of this test come from an MPI exerciser that gathered (using MPIR_Gather) a variety of types, including MPI_SHORT_INT. The way gather is implemented, it creates and then sends a struct datatype combining the tmp-data from the software tree with the local data. I pulled out the important bits and got this test case. It asserts on PPC32 Linux 1.1 and BGP 1.1rc0, but runs fine on 1.0.7. The addresses/displacements are fake, but were originally based on the actual values used inside MPIR_Gather. The type-create is done on the first two types just to show that it doesn't always fail.<div>
<br></div><div><br></div><div>Error message:</div><div><br></div><blockquote class="webkit-indent-blockquote" style="margin: 0 0 0 40px; border: none; padding: 0px;">Creating addr=[0x1,0x2] types=[8c000003,4c00010d] struct_displs=[1,2] blocks=[256,256] MPI_BOTTOM=(nil)<br>
foo:25<br>Assertion failed in file segment_ops.c at line 994: *lengthp > 0<br>internal ABORT - process 0</blockquote><div><div><br></div><div><br></div><div>Code</div><div><br></div></div><blockquote class="webkit-indent-blockquote" style="margin: 0 0 0 40px; border: none; padding: 0px;">
#include <stdio.h><br>#include <stdlib.h><br>#include <unistd.h><br>#include <mpi.h><br><br>void foo(void *sendbuf,<br> MPI_Datatype sendtype,<br> void *recvbuf,<br> MPI_Datatype recvtype)<br>
{<br> int blocks[2];<br> MPI_Aint struct_displs[2];<br> MPI_Datatype types[2], tmp_type;<br><br> blocks[0] = 256;<br> struct_displs[0] = (MPI_Aint)sendbuf;<br> types[0] = sendtype;<br> blocks[1] = 256;<br> struct_displs[1] = (MPI_Aint)recvbuf;<br>
types[1] = MPI_BYTE;<br><br> printf("Creating addr=[%p,%p] types=[%x,%x] struct_displs=[%lx,%lx] blocks=[%d,%d] MPI_BOTTOM=%p\n",<br> sendbuf, recvbuf, (unsigned)types[0], (unsigned)types[1], (unsigned long)struct_displs[0], (unsigned long)struct_displs[1], blocks[0], blocks[1], MPI_BOTTOM);<br>
MPI_Type_create_struct(2, blocks, struct_displs, types, &tmp_type);<br> printf("%s:%d\n", __func__, __LINE__);<br> MPI_Type_commit(&tmp_type);<br> printf("%s:%d\n", __func__, __LINE__);<br>
MPI_Type_free(&tmp_type);<br> puts("Done");<br>}<br><br><br>int main(void)<br>{<br> MPI_Init(NULL, NULL);<br><br> foo((void*)0x1,<br> MPI_FLOAT_INT,<br> (void*)0x2,<br> MPI_BYTE);<br> sleep(1);<br>
foo((void*)0x1,<br> MPI_DOUBLE_INT,<br> (void*)0x2,<br> MPI_BYTE);<br> sleep(1);<br> foo((void*)0x1,<br> MPI_SHORT_INT,<br> (void*)0x2,<br> MPI_BYTE);<br><br> MPI_Finalize();<br> return 0;<br>
}</blockquote><div><div><div><br></div></div><div><br></div><div><br></div><div>I don't know anything about how this might be fixed, but we are looking into it as well.</div><div><br></div><div>Thanks,</div><div>Joe Ratterman</div>
<div><a href="mailto:jratt@us.ibm.com">jratt@us.ibm.com</a></div></div>