[mpich-discuss] messages queue
Jarosław Bułat
kwant at agh.edu.pl
Mon Jul 14 15:11:41 CDT 2008
Hi!
> > I understand this behaviour; in fact, I expected something similar.
> > However, I thought the internal buffer was much bigger, or at least
> > that it could be enlarged. My application works with video frames
> > (~1MB). I thought it would be possible to send a few messages
> > (frames) to a queue and receive them all at once (sometimes the
> > sender is much faster than the receiver, which is very busy and
> > cannot receive all messages in time).
> >
> > I assume that using a non-blocking send instead of MPI_Send()
> > would resolve this problem. Is that true?
> >
> > Is the internal MPICH2 buffer fixed in size, so that it cannot be enlarged?
>
> When you're using the sock channel you need to increase the TCP
> buffers. To do that, run the following as root:
> echo 262142 > /proc/sys/net/core/rmem_max
> echo 262142 > /proc/sys/net/core/rmem_default
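
(Side note: those /proc settings are lost on reboot. To make them
persistent, the same values can go in /etc/sysctl.conf; this is
standard Linux kernel tuning, not MPICH-specific:)

    # /etc/sysctl.conf
    net.core.rmem_max = 262142
    net.core.rmem_default = 262142

    # apply the file without rebooting
    sysctl -p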
>
> In nemesis, set MPID_NEM_NUM_CELLS in
> src/mpid/ch3/channels/nemesis/nemesis/include/mpid_nem_datatypes.h
> Note, however, that cells are 64KB each, so be careful how much you
> increase this by.
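
(For illustration only: the default value of MPID_NEM_NUM_CELLS and its
exact form vary between MPICH2 versions, so check your copy of
mpid_nem_datatypes.h before editing. The change amounts to something
like:)

    /* src/mpid/ch3/channels/nemesis/nemesis/include/mpid_nem_datatypes.h */
    /* Hypothetical edit: raise the cell count to 128.  At 64KB per cell  */
    /* this reserves 128 * 64KB = 8MB for the queue.                      */
    #define MPID_NEM_NUM_CELLS 128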
>
> As I mentioned, I think a better solution would be to use MPI_Isend() if
> you can. With Isend, you won't be using the TCP buffers or
> shared-memory queues, which are a limited resource. Instead, the send
> queue will be stored in user memory, which is not limited (well, until
> you run out of memory :-) ). Of course, with Isend you'll have to make
> sure you call MPI_Test or MPI_Wait to free the requests and to know
> when the send buffer can be reused.
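
(A minimal sketch of that pattern; the frame size, queue depth, ranks,
and frame count below are made up for illustration, not taken from this
thread:)

    /* Sketch: queue several ~1MB frames with MPI_Isend, reusing each
     * buffer only after MPI_Wait says its previous send completed.
     * FRAME_SIZE, QUEUE_DEPTH, NFRAMES and the ranks are assumptions. */
    #include <mpi.h>
    #include <stdlib.h>

    #define FRAME_SIZE  (1 << 20)  /* ~1MB video frame */
    #define QUEUE_DEPTH 8          /* frames in flight at once */
    #define NFRAMES     100

    int main(int argc, char **argv)
    {
        char        *frames[QUEUE_DEPTH];
        MPI_Request  reqs[QUEUE_DEPTH];
        int          rank, i, f;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (i = 0; i < QUEUE_DEPTH; i++) {
            frames[i] = malloc(FRAME_SIZE);
            reqs[i]   = MPI_REQUEST_NULL;
        }

        if (rank == 0) {            /* fast sender */
            for (f = 0; f < NFRAMES; f++) {
                int slot = f % QUEUE_DEPTH;
                /* Before reusing a buffer, wait for the send that last
                 * used it (MPI_Wait on MPI_REQUEST_NULL returns at once). */
                MPI_Wait(&reqs[slot], MPI_STATUS_IGNORE);
                /* ... fill frames[slot] with the next frame here ... */
                MPI_Isend(frames[slot], FRAME_SIZE, MPI_CHAR,
                          1, 0, MPI_COMM_WORLD, &reqs[slot]);
            }
            /* Drain the remaining outstanding sends. */
            MPI_Waitall(QUEUE_DEPTH, reqs, MPI_STATUSES_IGNORE);
        } else if (rank == 1) {     /* busy receiver */
            for (f = 0; f < NFRAMES; f++)
                MPI_Recv(frames[0], FRAME_SIZE, MPI_CHAR,
                         0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        for (i = 0; i < QUEUE_DEPTH; i++)
            free(frames[i]);
        MPI_Finalize();
        return 0;
    }

With this scheme the sender can run up to QUEUE_DEPTH frames ahead of
the receiver before it blocks in MPI_Wait, and the queued frames live
in user memory rather than in the TCP buffers or shared-memory cells.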
This is exactly what I need. Thank you!
Jarek!