Dear Rajeev,

On Wed, Sep 7, 2011 at 3:27 PM, Nick Stokes <randomaccessiterator@gmail.com> wrote:
> On Wed, Sep 7, 2011 at 3:22 PM, Rajeev Thakur <thakur@mcs.anl.gov> wrote:
>> MPI_Type_create_hindexed does.
Moreover, on my 32-bit system MPI_Aint is typedef'ed to int (at least in my installation of MPICH2 1.3.3 on 32-bit Windows). So the hindexed type does not seem to solve the large-file problem in general. If it were possible to create an indexed type with MPI_Offset displacements, that would have helped.
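To make the concern concrete, here is a minimal sketch (not from the original mail) of building an hindexed type whose displacements are MPI_Aint; the block lengths and byte offsets are invented for illustration. On a build where sizeof(MPI_Aint) is 4, a displacement at or beyond 2 GiB overflows before MPI ever sees it:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int      blocklens[2] = { 1024, 1024 };
    MPI_Aint displs[2];
    displs[0] = 0;
    /* Intended byte offset: 3 GiB.  If MPI_Aint is a 32-bit int this
     * value wraps around and the resulting datatype is silently wrong. */
    displs[1] = (MPI_Aint)3 * 1024 * 1024 * 1024;

    printf("sizeof(MPI_Aint) = %d\n", (int)sizeof(MPI_Aint));

    MPI_Datatype filetype;
    MPI_Type_create_hindexed(2, blocklens, displs, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    /* ... the type would then be used in MPI_File_set_view ... */

    MPI_Type_free(&filetype);
    MPI_Finalize();
    return 0;
}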
Maybe I am on the wrong track here. Does my question make sense? Perhaps a more explicit and more relatable example would be the following:

Imagine a simulation with a total of 10^9 degrees of freedom, stored in double precision. The results are to be written to a single file, which adds up to 8 GB. MPI is used to parallelize via domain partitioning, and each partition holds a chunk of the data. These chunks correspond to non-overlapping regions, with gaps, in the final output. How would you do a collective read/write in such a case without integer overflow?
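For the simpler case where each rank's chunk happens to be a single contiguous block, one way I can imagine avoiding 32-bit displacements entirely is to compute the per-rank byte offset in MPI_Offset (64-bit) arithmetic and use MPI_File_write_at_all. This is only a hedged sketch under that assumption (file name, chunk sizes, and the even split are made up), not a claim that it handles the general gapped layout:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Roughly 10^9 doubles in total, split evenly (illustrative only). */
    long long total_dofs  = 1000000000LL;
    long long local_count = total_dofs / nprocs;

    double *buf = malloc((size_t)local_count * sizeof(double));
    /* ... fill buf with this partition's results ... */

    /* 64-bit byte offset of this rank's chunk; any gaps between chunks
     * could be added here without a 32-bit intermediate value. */
    MPI_Offset offset = (MPI_Offset)rank * local_count * (MPI_Offset)sizeof(double);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "result.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write at an explicit 64-bit offset. */
    MPI_File_write_at_all(fh, offset, buf, (int)local_count,
                          MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}

The part I am unsure about is the general case, where a rank's data maps to several non-contiguous regions of the file and a file view with an indexed filetype would normally be used.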
Many thanks