Hi Wei-keng,<br><br>This is just an interface problem, not a hard limit of MPI-IO. For example, if I run the <br>same case on 4 tasks instead of 8, it works just fine (example attached).<br><br>If I create an MPI derived type, for example with MPI_Type_contiguous, I can make the same call as below successfully. <br>
<br> int len = 322437120;<br>
double *buf = (double*) malloc(len * sizeof(double));<br>
MPI_Datatype elemtype;<br> int err;<br> err = MPI_Type_contiguous(len, MPI_DOUBLE, &elemtype);<br> err = MPI_Type_commit(&elemtype);<br>
err = MPI_File_write(fh, buf, 1, elemtype, &status);<br>
if (err != MPI_SUCCESS) {<br>
int errorStringLen;<br>
char errorString[MPI_MAX_ERROR_STRING];<br>
MPI_Error_string(err, errorString, &errorStringLen);<br>
printf("Error: MPI_File_write() (%s)\n",errorString);<br>
}<br>
<br><br>It seems to me that every operation PnetCDF performs using start and count can be described with an MPI_Type_create_subarray, which would both let PnetCDF avoid this interface limit and save a potentially considerable amount of memory.<br>
<br>- Jim<br><br><div class="gmail_quote">On Sun, Feb 17, 2013 at 10:10 PM, Wei-keng Liao <span dir="ltr"><<a href="mailto:wkliao@ece.northwestern.edu" target="_blank">wkliao@ece.northwestern.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi, Jim,<br>
<br>
In your test program, each process is writing 322437120 or 322437202 doubles.<br>
So 322437120 * sizeof(double) = 2,579,496,960, which is larger than 2^31-1, the max<br>
for a signed 4-byte integer. That did cause a 4-byte integer overflow in PnetCDF.<br>
But even MPI-IO will have a problem with this size.<br>
<br>
If you try the code fragment below, ROMIO will throw an error class<br>
MPI_ERR_ARG, and error string "Invalid count argument".<br>
<br>
int len = 322437120;<br>
double *buf = (double*) malloc(len * sizeof(double));<br>
<br>
int err = MPI_File_write(fh, buf, len, MPI_DOUBLE, &status);<br>
if (err != MPI_SUCCESS) {<br>
int errorStringLen;<br>
char errorString[MPI_MAX_ERROR_STRING];<br>
MPI_Error_string(err, errorString, &errorStringLen);<br>
printf("Error: MPI_File_write() (%s)\n",errorString);<br>
}<br>
<br>
A possible PnetCDF solution is to detect the overflow and divide a large request<br>
into multiple smaller ones, each with an upper bound of 2^31-1 bytes.<br>
Or PnetCDF can simply throw an error, like MPI-IO does.<br>
<br>
Any suggestion?<br>
<br>
Wei-keng<br>
<div><div><br>
On Feb 17, 2013, at 1:34 PM, Jim Edwards wrote:<br>
<br>
> Found the problem in the test program; a corrected program is attached. This reminds me of another issue: the interface to nfmpi_iput_vara is not defined in pnetcdf.mod.<br>
><br>
> - Jim<br>
><br>
> On Sun, Feb 17, 2013 at 11:43 AM, Jim Edwards <<a href="mailto:jedwards@ucar.edu" target="_blank">jedwards@ucar.edu</a>> wrote:<br>
> In my larger program I am getting an error:<br>
><br>
> PMPI_Type_create_struct(139): Invalid value for blocklen, must be non-negative but is -1715470336<br>
><br>
> I see a note about this in nonblocking.c:<br>
><br>
> for (j=0; j<reqs[i].varp->ndims; j++)<br>
> blocklens[i] *= reqs[i].count[j];<br>
> /* Warning! blocklens[i] might overflow */<br>
><br>
><br>
> But I tried to distill this into a small test case and I'm getting a different error. I've attached the test program anyway because I can't spot any error in it and think the problem must be in pnetcdf. Also, it seems that instead of<br>
> calling MPI_Type_create_struct you should be calling MPI_Type_create_subarray, which would avoid the problem of blocklens overflowing.<br>
><br>
> This test program is written for 8 mpi tasks, but it uses a lot of memory so you may need more than one node to run it.<br>
><br>
> --<br>
> Jim Edwards<br>
><br>
> CESM Software Engineering Group<br>
> National Center for Atmospheric Research<br>
> Boulder, CO<br>
> <a href="tel:303-497-1842" value="+13034971842" target="_blank">303-497-1842</a><br>
><br>
><br>
><br>
> --<br>
> Jim Edwards<br>
><br>
> CESM Software Engineering Group<br>
> National Center for Atmospheric Research<br>
> Boulder, CO<br>
> <a href="tel:303-497-1842" value="+13034971842" target="_blank">303-497-1842</a><br>
</div></div>> <testpnetcdf5.F90><br>
<br>
</blockquote></div><br><br clear="all"><br>-- <br><pre>Jim Edwards<br><br><br></pre>