MPI_TYPE_MAX limit using MPT

Maxwell Kelley kelley at giss.nasa.gov
Wed Oct 6 16:06:12 CDT 2010


FWIW, the same test using the blocking interface hit the MPI_TYPE_MAX 
limit at about the same point.
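
For concreteness, here is a minimal sketch of the two pnetcdf call styles the
test exercises (the file name, variable layout, and sizes are invented for
illustration, and error checking is omitted). In the blocking collective put,
any MPI datatype pnetcdf builds internally is created, used, and should be
freed within that single call; with the nonblocking put, the request persists
until the collective wait completes it.

/* sketch only: one 1-D float variable, 4 values per rank */
#include <mpi.h>
#include <pnetcdf.h>

int main(int argc, char **argv) {
    int rank, nprocs, ncid, dimid, varid, req, st;
    MPI_Offset start[1], count[1];
    float buf[4] = {0.0f, 1.0f, 2.0f, 3.0f};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    ncmpi_create(MPI_COMM_WORLD, "test.nc", NC_CLOBBER, MPI_INFO_NULL, &ncid);
    ncmpi_def_dim(ncid, "x", (MPI_Offset)4 * nprocs, &dimid);
    ncmpi_def_var(ncid, "v", NC_FLOAT, 1, &dimid, &varid);
    ncmpi_enddef(ncid);

    start[0] = (MPI_Offset)4 * rank;
    count[0] = 4;

    /* blocking collective write */
    ncmpi_put_vara_float_all(ncid, varid, start, count, buf);

    /* nonblocking write: post, then complete with a collective wait */
    ncmpi_iput_vara_float(ncid, varid, start, count, buf, &req);
    ncmpi_wait_all(ncid, 1, &req, &st);

    ncmpi_close(ncid);
    MPI_Finalize();
    return 0;
}

Since both forms hit the limit at about the same point, the leaked types do
not appear to be specific to the nonblocking request path.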

On Wed, 6 Oct 2010, Maxwell Kelley wrote:

>
> My test was indeed using the nonblocking interface; I could re-code with the 
> blocking interface if you think that would shed some light. The same test run 
> with mvapich2 didn't encounter any problem.  The MPI_TYPE_MAX issue is 
> mentioned here
>
> http://lists.mcs.anl.gov/pipermail/mpich-discuss/2010-February/006647.html
>
> so perhaps it's not pnetcdf that is forgetting to free datatypes.
>
> -Max
>
> On Wed, 6 Oct 2010, Rob Latham wrote:
>
>> On Wed, Oct 06, 2010 at 12:29:50PM -0400, Maxwell Kelley wrote:
>>> Is this normal?  Setting MPI_TYPE_MAX to 65536 simply allowed more
>>> I/O to be performed before the error appeared. The limit is reached
>>> more quickly using more processors.  Assuming that this is a case of
>>> types not being freed after use, should I just set this limit high
>>> enough that it will never be exceeded during a 12-hour batch job?
>> 
>> I wish we knew more about where the extra data types came from.
>> 
>> I imagine there is some cost to setting MPI_TYPE_MAX to 2 billion.
>> Hopefully, you can find a value that lets you complete your work while
>> I try to find the places where pnetcdf forgets to free datatypes.
>> 
>> Are you still using the nonblocking interface?
>> 
>> ==rob
>> 
>> -- 
>> Rob Latham
>> Mathematics and Computer Science Division
>> Argonne National Lab, IL USA
>> 
>> 
>
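
To make the suspected leak concrete: as I understand MPT, MPI_TYPE_MAX bounds
the number of MPI derived datatypes that can exist at once in a process, so a
type that is constructed but never freed will eventually exhaust the table no
matter how high the limit is set. Below is a minimal sketch of the intended
create/commit/use/free lifecycle; the vector constructor and the MPI-IO write
are only stand-ins for whatever flattened type pnetcdf builds per request, not
pnetcdf's actual code.

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_File fh;
    MPI_Datatype ftype;
    MPI_Status status;
    double buf[100];
    int i, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = 0; i < 100; i++) buf[i] = rank + 0.01 * i;

    MPI_File_open(MPI_COMM_WORLD, "demo.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    for (i = 0; i < 1000; i++) {
        /* stand-in for a flattened request type: 10 blocks of 5 doubles */
        MPI_Type_vector(10, 5, 10, MPI_DOUBLE, &ftype);
        MPI_Type_commit(&ftype);

        MPI_File_write_at_all(fh, (MPI_Offset)0, buf, 1, ftype, &status);

        /* without this free, each iteration leaks one datatype and the
           MPI_TYPE_MAX error appears once the table fills up */
        MPI_Type_free(&ftype);
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Until the missing frees are located, raising MPI_TYPE_MAX in the job
environment (as in the runs described above) only postpones the failure
rather than fixing it.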


