[mpich-discuss] MPI_Bsend
James Dinan
dinan at mcs.anl.gov
Thu Nov 10 14:27:29 CST 2011
I think you want:
MPI_Buffer_attach(malloc(bufsize), bufsize - MPI_BSEND_OVERHEAD);
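
Note that the overhead is consumed per buffered message, so the attached buffer
must cover every send that can be pending at once. A minimal sketch, using the
message size and count from the test program below (the variable names are just
for illustration):

  int msgsize = 1024, nmsgs = 4;   /* values from the test program below */
  int bufsize = nmsgs * (msgsize + MPI_BSEND_OVERHEAD);
  MPI_Buffer_attach(malloc(bufsize), bufsize);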
Best,
~Jim.
On 11/10/11 2:17 PM, Wei-keng Liao wrote:
> Thanks, Dave and Pavan,
>
> Is MPI_BSEND_OVERHEAD an additional amount of space per send call?
>
> I added it to the buffer size, but still got the same error.
> Since I am using MPI_BYTE, I guess I don't need to call MPI_Pack_size().
>
> bufsize = 1024*4;
> bufsize += MPI_BSEND_OVERHEAD;
> MPI_Buffer_attach(malloc(bufsize), bufsize);
>
>
>
> Wei-keng
>
>
> On Nov 10, 2011, at 1:37 PM, Dave Goodell wrote:
>
>> You need to add MPI_BSEND_OVERHEAD to your buffer size. It is an upper bound on the amount of space the MPI implementation will use internally from your buffer. So if you want to guarantee that X bytes will be successfully buffered by the library, you need to attach a buffer of size X+MPI_BSEND_OVERHEAD. Officially you are supposed to use MPI_Pack_size as well, although I'm not sure that is strictly necessary in practice.
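>>
>> For example, a sketch of the sizing the standard describes, counting the overhead once per pending message (message size and count taken from your program):
>>
>> int packsize, bufsize;
>> MPI_Pack_size(1024, MPI_BYTE, MPI_COMM_WORLD, &packsize);
>> bufsize = 4 * (packsize + MPI_BSEND_OVERHEAD); /* up to 4 sends pending */
>> MPI_Buffer_attach(malloc(bufsize), bufsize);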
>>
>> See MPI-2.2, pages 47-48.
>>
>> -Dave
>>
>> On Nov 10, 2011, at 1:30 PM CST, Wei-keng Liao wrote:
>>
>>> My program using MPI_Bsend failed due to insufficient buffer space,
>>> even though it allocated exactly the size required, and not one byte more.
>>> I wonder if this is an MPICH bug, or if MPI requires more space than the
>>> message data itself. If it is the latter, how much more is needed?
>>>
>>> Wei-keng
>>>
>>> error message:
>>> $ mpiexec -machinefile=machinefile -l -n 4 a.out
>>> [2] Fatal error in MPI_Bsend: Invalid buffer pointer, error stack:
>>> [2] MPI_Bsend(182).......: MPI_Bsend(buf=0x7fff81048a50, count=1024, MPI_BYTE, dest=0, tag=2, MPI_COMM_WORLD) failed
>>> [2] MPIR_Bsend_isend(318): Insufficient space in Bsend buffer; requested 1024; total buffer size is 4096
>>>
>>> ---- bsend.c -----------------------------
>>> #include <stdio.h>
>>> #include <stdlib.h>
>>> #include <mpi.h>
>>>
>>> int main(int argc, char **argv)
>>> {
>>>     int i, rank, nprocs, src, bufsize;
>>>     char buf[1024];
>>>     void *bsend_buf = NULL;
>>>     MPI_Status status;
>>>
>>>     MPI_Init(&argc, &argv);
>>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>>     MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
>>>
>>>     bufsize = 1024*4;
>>>     // bufsize += 512; // if uncommented, the program runs successfully
>>>     MPI_Buffer_attach(malloc(bufsize), bufsize);
>>>
>>>     for (i=0; i<4; i++) {
>>>         if (rank > 0)
>>>             MPI_Bsend(buf, 1024, MPI_BYTE, 0, rank, MPI_COMM_WORLD);
>>>         else {
>>>             for (src=1; src<nprocs; src++)
>>>                 MPI_Recv(buf, 1024, MPI_BYTE, src, src, MPI_COMM_WORLD, &status);
>>>         }
>>>     }
>>>
>>>     MPI_Buffer_detach(&bsend_buf, &bufsize);
>>>     free(bsend_buf);
>>>     MPI_Finalize();
>>>     return 0;
>>> }
>>>
>>>
>>>