[mpich-discuss] MPI I/O using one big struct for file I/O or a series of views

Nick Stokes randomaccessiterator at gmail.com
Wed Sep 7 14:25:11 CDT 2011


Hi all,

I have a (CFD) application where a large discretized domain is split into
pieces (subdomains), with multiple fields defined on them.  In each
subdomain, the local data is contiguous in memory and corresponds to a set
of (monotonically increasing) indices of the whole domain.

I need to output these fields into a single file where each field is globally
contiguous, with the fields appended one after another.  I.e.:

For 3 fields u, v, w, in memory of rank p:
  {(up1, up2, up3, up4), (vp1, vp2, vp3, vp4), (wp1, wp2, wp3, wp4)}
in the file:
  ((up1, uq1, uq2, up1, ur1, ur2, up2, uq3, ...),
   (vp1, vq1, vq2, vp1, vr1, vr2, vp2, vq3, ...),
   (wp1, wq1, wq2, wp1, wr1, wr2, wp2, wq3, ...))
for ranks p, q, r, and so on, where {} indicates a non-contiguous collection
and () indicates a contiguous one.
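
For concreteness, the file layout of one field for one rank can be described
with an indexed datatype built from that rank's global indices.  A minimal
sketch in C, where local_n and local_idx[] are placeholder names for my
actual index data and the fields are assumed to be doubles:

/* Describe this rank's slots of ONE global field in the file.  local_idx[]
 * holds the rank's monotonically increasing global indices. */
MPI_Datatype field_type;
MPI_Type_create_indexed_block(local_n, 1, local_idx, MPI_DOUBLE, &field_type);
MPI_Type_commit(&field_type);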

Would there be any difference in performance between doing as many collective
writes as there are fields, with repeated MPI_File_set_view and
MPI_File_write_all (option 1), versus a single-shot collective write with
aggregated MPI datatypes (option 2)?  And if there is a difference, how much
does it depend on the platform (e.g. Lustre vs. local disks)?  Is there any
way to tell roughly without doing detailed benchmarks?
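
A rough sketch of option 1, reusing field_type from above (fh is the
already-opened file handle, global_n is the total number of points in the
whole domain, and u, v, w are the local arrays; all names are placeholders):

/* Option 1: one collective write per field; the view is shifted by one
 * whole global field (global_n doubles) each iteration. */
double *fields[3] = { u, v, w };   /* each holds local_n doubles */
for (int f = 0; f < 3; ++f) {
    MPI_Offset disp = (MPI_Offset)f * global_n * sizeof(double);
    MPI_File_set_view(fh, disp, MPI_DOUBLE, field_type,
                      "native", MPI_INFO_NULL);
    MPI_File_write_all(fh, fields[f], local_n, MPI_DOUBLE,
                       MPI_STATUS_IGNORE);
}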

In code this would be: http://pastebin.com/iprRnub9
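
For completeness, here is roughly what option 2 looks like inline (again just
a sketch with placeholder names, not the paste itself): the file side is
three copies of field_type, each shifted by one global field, and the memory
side glues the three separate local arrays together with absolute addresses
so that a single write_all covers everything.

/* Option 2: one collective write with aggregated datatypes. */
int          blens[3]  = { 1, 1, 1 };
MPI_Aint     fdisp[3];
MPI_Datatype copies[3] = { field_type, field_type, field_type };
MPI_Datatype big_filetype;
for (int f = 0; f < 3; ++f)
    fdisp[f] = (MPI_Aint)f * global_n * sizeof(double);
MPI_Type_create_struct(3, blens, fdisp, copies, &big_filetype);
MPI_Type_commit(&big_filetype);

/* Memory side: u, v, w are separate (non-contiguous) arrays, so describe
 * them with absolute addresses and write from MPI_BOTTOM. */
int          mlens[3]  = { local_n, local_n, local_n };
MPI_Aint     mdisp[3];
MPI_Datatype mtypes[3] = { MPI_DOUBLE, MPI_DOUBLE, MPI_DOUBLE };
MPI_Datatype memtype;
MPI_Get_address(u, &mdisp[0]);
MPI_Get_address(v, &mdisp[1]);
MPI_Get_address(w, &mdisp[2]);
MPI_Type_create_struct(3, mlens, mdisp, mtypes, &memtype);
MPI_Type_commit(&memtype);

MPI_File_set_view(fh, 0, MPI_DOUBLE, big_filetype, "native", MPI_INFO_NULL);
MPI_File_write_all(fh, MPI_BOTTOM, 1, memtype, MPI_STATUS_IGNORE);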

Any comments would help greatly! Thank you,

- Nick

