I'm using mpich2 1.4.1p1 (ROMIO 1.2.6).<br><br>In your test program, the first MPI_File_write works, but the second gives:<br><br>Error: MPI_File_write (ddtype) Invalid argument, error stack:<br>MPI_FILE_WRITE(100): Invalid count argument<br>
<br><br><div class="gmail_quote">On Mon, Feb 18, 2013 at 10:52 AM, Wei-keng Liao <span dir="ltr"><<a href="mailto:wkliao@ece.northwestern.edu" target="_blank">wkliao@ece.northwestern.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi, Jim,<br>
<br>
I tested your code with 4 MPI processes and got the error below.<br>
MPI_FILE_WRITE_ALL(105): Invalid count argument<br>
<br>
Maybe you are using IBM's MPI-IO? (I am using MPICH.)<br>
<br>
Can you try the attached Fortran program (run on 1 process)?<br>
I got the error below.<br>
<br>
Error: MPI_File_write MPI_DOUBLE Invalid argument, error stack:<br>
MPI_FILE_WRITE(102): Invalid count argument<br>
Error: MPI_File_write (ddtype) Invalid argument, error stack:<br>
MPI_FILE_WRITE(102): Invalid count argument<br>
<br>
<br><br>
<br>
Wei-keng<br>
<br>
On Feb 18, 2013, at 9:00 AM, Jim Edwards wrote:<br>
<br>
> Hi Wei-keng,<br>
><br>
> This is just an interface problem and not a hard limit of MPI-IO. For example, if I run the<br>
> same case on 4 tasks instead of 8, it works just fine (example attached).<br>
><br>
> If I create an MPI derived type, for example with MPI_Type_contiguous, I can make the same call as below successfully:<br>
><br>
> int len = 322437120;<br>
> double *buf = (double*) malloc(len * sizeof(double));<br>
> MPI_Datatype elemtype;<br>
> int err;<br>
> err = MPI_Type_contiguous(len, MPI_DOUBLE, &elemtype);<br>
> err = MPI_Type_commit(&elemtype);<br>
> err = MPI_File_write(fh, buf, 1, elemtype, &status);<br>
> if (err != MPI_SUCCESS) {<br>
>     int errorStringLen;<br>
>     char errorString[MPI_MAX_ERROR_STRING];<br>
>     MPI_Error_string(err, errorString, &errorStringLen);<br>
>     printf("Error: MPI_File_write (%s)\n", errorString);<br>
> }<br>
><br>
><br>
><br>
><br>
><br>
> It seems to me that every operation that PnetCDF describes with start and count can be expressed as an MPI_Type_create_subarray, which would both let PnetCDF avoid this interface limit and save a potentially considerable amount of memory.<br>
><br>
> - Jim<br>
><br>
> On Sun, Feb 17, 2013 at 10:10 PM, Wei-keng Liao <<a href="mailto:wkliao@ece.northwestern.edu" target="_blank">wkliao@ece.northwestern.edu</a>> wrote:<br>
> Hi, Jim,<br>
><br>
> In your test program, each process writes 322437120 or 322437202 doubles,<br>
> so 322437120 * sizeof(double) = 2,579,496,960 bytes, which is larger than 2^31, the max<br>
> for a signed 4-byte integer. This causes a 4-byte integer overflow in PnetCDF.<br>
> But even MPI-IO has a problem with this size.<br>
><br>
> If you try the code fragment below, ROMIO will throw an error class<br>
> MPI_ERR_ARG, and error string "Invalid count argument".<br>
><br>
> int len = 322437120;<br>
> double *buf = (double*) malloc(len * sizeof(double));<br>
><br>
> int err = MPI_File_write(fh, buf, len, MPI_DOUBLE, &status);<br>
> if (err != MPI_SUCCESS) {<br>
>     int errorStringLen;<br>
>     char errorString[MPI_MAX_ERROR_STRING];<br>
>     MPI_Error_string(err, errorString, &errorStringLen);<br>
>     printf("Error: MPI_File_write (%s)\n", errorString);<br>
> }<br>
><br>
> A possible PnetCDF solution is to detect the overflow and divide a large request<br>
> into multiple smaller ones, each with an upper bound of 2^31-1 bytes.<br>
> Or PnetCDF can simply throw an error, like MPI-IO.<br>
><br>
> Any suggestion?<br>
><br>
> Wei-keng<br>
><br>
> On Feb 17, 2013, at 1:34 PM, Jim Edwards wrote:<br>
><br>
> > Found the problem in the test program; a corrected program is attached. This reminds me of another issue: the interface to nfmpi_iput_vara is not defined in pnetcdf.mod.<br>
> ><br>
> > - Jim<br>
> ><br>
> > On Sun, Feb 17, 2013 at 11:43 AM, Jim Edwards <<a href="mailto:jedwards@ucar.edu" target="_blank">jedwards@ucar.edu</a>> wrote:<br>
> > In my larger program I am getting an error:<br>
> ><br>
> > PMPI_Type_create_struct(139): Invalid value for blocklen, must be non-negative but is -1715470336<br>
> ><br>
> > I see a note about this in nonblocking.c:<br>
> ><br>
> > for (j=0; j<reqs[i].varp->ndims; j++)<br>
> > blocklens[i] *= reqs[i].count[j];<br>
> > /* Warning! blocklens[i] might overflow */<br>
> ><br>
> ><br>
> > But when I tried to distill this into a small test case I got a different error. I've attached the test program anyway, because I can't spot any error there and think it must be in PnetCDF. Also, it seems that instead of<br>
> > calling MPI_Type_create_struct you should be calling MPI_Type_create_subarray, which would avoid the problem of blocklens overflowing.<br>
> ><br>
> > This test program is written for 8 MPI tasks, but it uses a lot of memory, so you may need more than one node to run it.<br>
> ><br>
> > --<br>
> > Jim Edwards<br>
> ><br>
> > CESM Software Engineering Group<br>
> > National Center for Atmospheric Research<br>
> > Boulder, CO<br>
> > <a href="tel:303-497-1842" value="+13034971842" target="_blank">303-497-1842</a><br>
> ><br>
> ><br>
> ><br>
> > <testpnetcdf5.F90><br>
><br>
><br>
><br>
><br>
> --<br>
> Jim Edwards<br>
><br>
><br>
> <testpnetcdf5.F90><br>
<br>
<br></blockquote></div><br><br clear="all"><br>-- <br><pre>Jim Edwards<br><br><br></pre>