[mpich-discuss] Sending data many times, packing data or using derived type?
TAY wee-beng
zonexo at gmail.com
Mon Jun 4 15:20:10 CDT 2012
Hi Jed,
Thanks. For a while I thought I had emailed the wrong mailing list ;-)
I'll do a simple subroutine to check.
Yours sincerely,
TAY wee-beng
On 4/6/2012 5:02 PM, Jed Brown wrote:
> On Mon, Jun 4, 2012 at 9:52 AM, TAY wee-beng <zonexo at gmail.com> wrote:
>
> Hi,
>
> I am doing computational fluid dynamics and I have a 3D finite
> volume code. I have partitioned the data in the z direction. At
> times, I need to copy some boundary data (one 2D slice) from one
> processor to another. The data are the u, v and w velocities, and
> they are contiguous in memory.
>
>
> I notice you frequently on the PETSc list, so I'll point out that
> VecScatter (and DMDA for structured grids) handles this generically
> and can map it to MPI in several different ways.
>
> Is it recommended to
>
> 1. send u, v and w as separate MPI calls, or
>
> 2. copy all the u, v, w data into a 1D array and send it just once,
> then unpack and update the data on the receiving side, or
>
> 3. use a derived type to group all these data together, then send
> once and update the data on the receiving side?
>
> Which is the best choice? Does it depend on the size of the data?
> I think my cluster uses InfiniBand, if I'm not wrong.
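>
> In code, I mean something like the following (just a C sketch, not my
> actual code; n = nx*ny is the number of points in the boundary slice,
> and "dest" and "tag" are placeholders):
>
>     #include <mpi.h>
>     #include <stdlib.h>
>     #include <string.h>
>
>     /* Option 1: send u, v and w as three separate messages. */
>     void send_slice_separate(double *u, double *v, double *w, int n,
>                              int dest, int tag, MPI_Comm comm)
>     {
>         MPI_Send(u, n, MPI_DOUBLE, dest, tag + 0, comm);
>         MPI_Send(v, n, MPI_DOUBLE, dest, tag + 1, comm);
>         MPI_Send(w, n, MPI_DOUBLE, dest, tag + 2, comm);
>     }
>
>     /* Option 2: copy everything into one 1D buffer and send once. */
>     void send_slice_packed(double *u, double *v, double *w, int n,
>                            int dest, int tag, MPI_Comm comm)
>     {
>         double *buf = malloc(3 * (size_t)n * sizeof(double));
>         memcpy(buf,         u, n * sizeof(double));
>         memcpy(buf + n,     v, n * sizeof(double));
>         memcpy(buf + 2 * n, w, n * sizeof(double));
>         MPI_Send(buf, 3 * n, MPI_DOUBLE, dest, tag, comm);
>         free(buf);
>     }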
>
>
> Interlacing u,v,w together in memory is generally better for serial
> performance because it reuses cache more effectively and keeps the
> number of memory streams manageable. It is also better for packing
> buffers because the relevant data is less scattered in memory.
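>
> As a rough illustration (untested, and the struct and indexing
> convention here are only assumptions about your layout), interlacing
> makes the whole boundary plane one contiguous block, so it can be
> sent without any packing at all:
>
>     #include <mpi.h>
>     #include <stddef.h>
>
>     /* All three velocity components of a grid point sit next to each
>      * other in memory; three doubles have no padding, so the plane
>      * can be sent as plain MPI_DOUBLEs. */
>     typedef struct { double u, v, w; } Vel;
>
>     /* Send the k = kplane slice of a z-partitioned field, assuming
>      * i varies fastest, then j, then k (so a fixed-k plane is
>      * contiguous). */
>     void send_plane(Vel *vel, int nx, int ny, int kplane,
>                     int dest, int tag, MPI_Comm comm)
>     {
>         Vel *plane = vel + (size_t)kplane * nx * ny;
>         MPI_Send(plane, 3 * nx * ny, MPI_DOUBLE, dest, tag, comm);
>     }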
>
> Relative performance of user packing versus datatypes is quite
> implementation- and hardware-dependent. You can implement both or just
> implement one and plan to write the other implementation if you have
> evidence that it will be tangibly better (and your time is best spent
> tuning at that level).
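>
> If you do try the datatype route, one sketch for the non-interlaced
> case (untested; the names are placeholders) is to describe the three
> separate slices with a single derived type built from their absolute
> addresses, so the library decides how to pack or pipeline them:
>
>     #include <mpi.h>
>
>     void send_slice_datatype(double *u, double *v, double *w, int n,
>                              int dest, int tag, MPI_Comm comm)
>     {
>         int          blocklens[3] = { n, n, n };
>         MPI_Aint     displs[3];
>         MPI_Datatype types[3] = { MPI_DOUBLE, MPI_DOUBLE, MPI_DOUBLE };
>         MPI_Datatype slice3;
>
>         MPI_Get_address(u, &displs[0]);
>         MPI_Get_address(v, &displs[1]);
>         MPI_Get_address(w, &displs[2]);
>
>         /* Absolute displacements, so the send buffer is MPI_BOTTOM. */
>         MPI_Type_create_struct(3, blocklens, displs, types, &slice3);
>         MPI_Type_commit(&slice3);
>         MPI_Send(MPI_BOTTOM, 1, slice3, dest, tag, comm);
>         MPI_Type_free(&slice3);
>     }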
>
>