[mpich-discuss] How to change system buffer size for MPI_Send and MPI_Recv?
Pavan Balaji
balaji at mcs.anl.gov
Fri Apr 23 21:07:55 CDT 2010
You can try playing with the MPID_NEM_NUM_CELLS and MPID_NEM_CELL_LEN
values in src/mpid/ch3/channels/nemesis/nemesis/include/mpid_nem_datatypes.h.
Personally, I'd be very surprised if that makes much of a difference
for well-designed applications. But if it does, please let us know :-).
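Concretely, the knobs look something like this (the values shown here
are illustrative, not necessarily the defaults in your tree):

    /* src/mpid/ch3/channels/nemesis/nemesis/include/mpid_nem_datatypes.h */
    #define MPID_NEM_NUM_CELLS  64           /* cells per free queue */
    #define MPID_NEM_CELL_LEN   (64 * 1024)  /* bytes per cell */

More cells, or larger ones, give the sender more shared-memory
buffering before it has to wait for the receiver, at the cost of
extra memory per process.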
-- Pavan
On 04/23/2010 07:09 PM, Hanjun Kim wrote:
> Hi Pavan,
>
> Thank you for your reply.
> I am working on parallelization techniques with MPI, so various
> programs (especially the SPEC benchmarks) are my candidates. They
> will be executed within one node for now, so it is fine to modify
> the MPICH2 code. I think it will help me understand the effect of
> the system buffer size on my technique. Which parts of the code
> should I modify?
>
> Thank you.
>
> Best,
> Hanjun
>
> On Fri, Apr 23, 2010 at 6:51 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>> Is all communication within the same node? There's no easy way to increase
>> this size without modifying the MPICH2 code. Also, that's not really a good
>> solution, since it might break on some other network.
>>
>> What exactly is your application trying to do? It sounds like it might be
>> better for you to prepost the receive (MPI_Irecv) and enable asynchronous
>> progress so that data can be sent out on demand. In MPICH2-1.2.1p1, there's
>> an experimental version available if you'd like to use it (configure option
>> --enable-async-progress). In the MPICH2 1.3.x series, it is compiled in by
>> default and is switched on/off via an environment variable.
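>>
>> As a rough sketch of the preposting pattern (buf, count, src, and tag
>> here are placeholders for your application's values):
>>
>>     /* Receiver: post the receive before the sender starts, so the
>>        incoming data has a matching buffer and the transfer can make
>>        progress while other work goes on. */
>>     MPI_Request req;
>>     MPI_Irecv(buf, count, MPI_BYTE, src, tag, MPI_COMM_WORLD, &req);
>>     /* ... overlap other work here ... */
>>     MPI_Wait(&req, MPI_STATUS_IGNORE);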
>>
>> -- Pavan
>>
>> On 04/23/2010 04:46 PM, Hanjun Kim wrote:
>>> Hi,
>>>
>>> I installed MPICH2 on a 24-core SMP machine (64-bit Ubuntu) and
>>> parallelized several programs with MPI. In general, the programs
>>> performed well. However, when large amounts of data were sent through
>>> MPI_Send and MPI_Recv, the sender blocked until the receiver had
>>> received the data, and performance degraded. Is there some preset
>>> system buffer size for MPI_Send and MPI_Recv? If so, how can I
>>> increase it? I believe that increasing the system buffer size for
>>> MPI_Send would improve performance.
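>>>
>>> A minimal sketch of what I mean (big_buf, n, and do_other_work are
>>> placeholders for my actual buffers and work):
>>>
>>>     if (rank == 0) {
>>>         /* Blocks here until rank 1 posts the matching receive. */
>>>         MPI_Send(big_buf, n, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
>>>     } else if (rank == 1) {
>>>         /* While rank 1 is busy here, rank 0 sits in MPI_Send. */
>>>         do_other_work();
>>>         MPI_Recv(big_buf, n, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
>>>                  MPI_STATUS_IGNORE);
>>>     }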
>>>
>>> Thank you in advance.
>>>
>>> Best,
>>> Hanjun
>> --
>> Pavan Balaji
>> http://www.mcs.anl.gov/~balaji
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji