[mpich-discuss] MPI_Send/Recv vs MPI_un/pack performance
Pavan Balaji
balaji at mcs.anl.gov
Thu Jul 31 13:08:20 CDT 2008
Send/Recv do internal pipelining while packing large non-contiguous
messages: packing one chunk can overlap transmission of the previous
one. Other than that, it should be the same.
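For example (a minimal sketch, not from the thread; the strided layout
and names are made up), handing a derived datatype straight to
MPI_Send lets the library do that overlap internally:

#include <mpi.h>

/* Describe every 4th double in buf with a derived datatype and send
 * the whole region in one call; internally the library can pack one
 * chunk while the previous chunk is already on the wire. */
void send_strided(double *buf, int nblocks, int dest, MPI_Comm comm)
{
    MPI_Datatype strided;
    MPI_Type_vector(nblocks, 1, 4, MPI_DOUBLE, &strided);
    MPI_Type_commit(&strided);
    MPI_Send(buf, 1, strided, dest, 0, comm);
    MPI_Type_free(&strided);
}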
(Sent from my iPhone)
--
Pavan Balaji
http://www.mcs.anl.gov
On Jul 31, 2008, at 1:45 PM, Rob Ross <rross at mcs.anl.gov> wrote:
> Hi Roberto,
>
> We could make the MPI_Send()/MPI_Recv() of non-packed data
> slower :)...
>
> Our MPI_Send() and MPI_Recv() use the same techniques to process
> datatypes as our MPI_Pack() and MPI_Unpack(), so they process types
> very efficiently, and they can avoid copying data from your buffer
> in some cases (such as by pushing data directly into the socket
> buffer, and receiving directly from the socket buffer into the
> correct memory locations). By packing manually, you're forcing an
> extra memory copy into the path, making things slower.
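>
> Schematically (a hand-written sketch, not a copy of our actual code;
> the function and variable names here are made up), the manual path
> adds a full user-space copy before anything hits the wire:
>
> #include <stdlib.h>
> #include <mpi.h>
>
> /* Hypothetical pack-then-send path; `dtype` is assumed to describe
>    the non-contiguous layout of `buf`. */
> void pack_and_send(void *buf, MPI_Datatype dtype, int dest, MPI_Comm comm)
> {
>     int pos = 0, size;
>     MPI_Pack_size(1, dtype, comm, &size);
>     char *tmp = malloc(size);
>     MPI_Pack(buf, 1, dtype, tmp, size, &pos, comm); /* copy #1: buf -> tmp */
>     MPI_Send(tmp, pos, MPI_PACKED, dest, 0, comm);  /* copy #2: tmp -> socket */
>     free(tmp);
>     /* MPI_Send(buf, 1, dtype, dest, 0, comm) could instead stream
>        straight from buf into the socket buffer. */
> }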
>
> Perhaps one of the socket channel implementors can provide further
> details.
>
> Regards,
>
> Rob
>
> On Jul 31, 2008, at 12:18 PM, Roberto Fichera wrote:
>
>> Hi All on the list,
>>
>> My apologies if this topic has already been discussed many times on
>> the list, but while experimenting with MPI_Send() and MPI_Recv() I
>> ended up trying to optimize both the send and receive paths of our
>> serialization/deserialization functions. Looking around, I decided
>> to use MPI_Pack() and MPI_Unpack() with a preallocated buffer and
>> to send/receive everything in one "big shot". After getting it
>> working, I wrote a test program to profile the performance of both
>> approaches. The results were quite stunning: across various sizes
>> (in MB) of serialized data, a plain MPI_Send() is ~2 to 4 times
>> faster than the paired MPI_Pack() & MPI_Send()! My understanding
>> was that MPI_Pack() exists precisely to make packing and unpacking
>> data for send/receive faster, so am I missing something (OS tuning,
>> perhaps)?
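>>
>> The timing loop looks roughly like this (a simplified sketch, not
>> my exact test program; buf, dtype, tmp, and packsize are prepared
>> beforehand):
>>
>> #include <stdio.h>
>> #include <mpi.h>
>>
>> /* Rank 0 times both variants against a receiver on rank 1. */
>> void compare(void *buf, MPI_Datatype dtype, char *tmp, int packsize)
>> {
>>     int pos = 0;
>>     double t0, t_direct, t_packed;
>>
>>     t0 = MPI_Wtime();
>>     MPI_Send(buf, 1, dtype, 1, 0, MPI_COMM_WORLD);        /* direct */
>>     t_direct = MPI_Wtime() - t0;
>>
>>     t0 = MPI_Wtime();
>>     MPI_Pack(buf, 1, dtype, tmp, packsize, &pos, MPI_COMM_WORLD);
>>     MPI_Send(tmp, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD); /* pack+send */
>>     t_packed = MPI_Wtime() - t0;
>>
>>     printf("direct %.6f s   pack+send %.6f s\n", t_direct, t_packed);
>> }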
>>
>> I'm using mpich2 v1.0.7 with channel ch3:socket on Fedora 8 x86_64.
>>
>> Best regards,
>> Roberto Fichera.
>>
>> [Attachments: char.png, double.png, int.png]
>