MPI_TYPE_MAX limit using MPT

Nils Smeds nils.smeds at se.ibm.com
Thu Oct 14 05:40:53 CDT 2010


If you are interested in investigating, the following wrapper may give 
you some information about the number of types used.

mpicc -c wrap_some.c
ar -rcv libwrap.a wrap_some.o

and then experiment a little with how to get your libwrap.a into the link 
order: it must come after the Fortran MPI library, but before the C 
implementation of MPI.
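
A hypothetical link line (the mpif90 driver and the exact placement are 
assumptions here; many compiler wrappers append the MPI libraries after 
the user-supplied ones, which lands libwrap.a between your Fortran code 
and the C-level MPI implementation):

mpif90 -o myprog myprog.o -L. -lwrap

If your wrapper orders things differently, linking wrap_some.o directly 
into the application usually works too, since explicit object files are 
resolved ahead of any library.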

It will print a summary of type commits/frees at MPI_Finalize. And if you 
want intermediate values during the run, you can call 
MPI_Type_Stats_{commit,free,delta,peak} from your own code (a sketch 
follows the listing below).

/Nils

$ cat wrap_some.c 
#include "mpi.h"
#include <stdio.h>

/* total commits, total frees, currently committed (balance), high-water mark */
static long ncommit=0,nfree=0,ndelta=0,npeak=0;

int MPI_Type_commit( MPI_Datatype *datatype ) {
   ncommit++;
   if(ndelta==npeak) npeak++;  /* about to pass the previous high-water mark */
   ndelta++;                   /* one more type in use */
   return PMPI_Type_commit(datatype);
}

int MPI_Type_free( MPI_Datatype *datatype ) {
   nfree++;     /* count the free */
   ndelta--;    /* one less type in use */
   return PMPI_Type_free(datatype);
}

int MPI_Finalize(){
   int myid;
   MPI_Comm_rank(MPI_COMM_WORLD,&myid);
   printf("Rank: %4d  Free: %8ld  Commit: %8ld  Balance: %8ld  Peak: 
%8ld\n",
       myid,nfree,ncommit,ndelta,npeak);
   return PMPI_Finalize();
}

/* Accessors so application code can sample the counters mid-run */
long MPI_Type_Stats_commit(void) {
   return ncommit;
}

long MPI_Type_Stats_free(void) {
   return nfree;
}

long MPI_Type_Stats_peak(void) {
   return npeak;
}

long MPI_Type_Stats_delta(void) {
   return ndelta;
}
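
If you want to sample the counters mid-run, here is a minimal sketch of 
the calling side (illustrative only; the extern declarations are needed 
because wrap_some.c ships no header):

#include <stdio.h>

extern long MPI_Type_Stats_commit(void);
extern long MPI_Type_Stats_free(void);
extern long MPI_Type_Stats_delta(void);
extern long MPI_Type_Stats_peak(void);

void report_type_usage(void) {
   /* delta is the number of types currently committed, peak the high-water mark */
   printf("commits=%ld  frees=%ld  in-use=%ld  peak=%ld\n",
       MPI_Type_Stats_commit(), MPI_Type_Stats_free(),
       MPI_Type_Stats_delta(), MPI_Type_Stats_peak());
}

Call report_type_usage() at suspect points, for instance after each write 
phase, to see where the balance grows.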
______________________________________________
Nils Smeds,  IBM Deep Computing / World Wide Coordinated Tuning Team
IT Specialist, Mobile phone: +46-70-793 2639
Fax. +46-8-793 9523
Mail address: IBM Sweden; Loc. 5-03; 164 92 Stockholm; SWEDEN



From:   Maxwell Kelley <kelley at giss.nasa.gov>
To:     Rob Latham <robl at mcs.anl.gov>
Cc:     parallel-netcdf at lists.mcs.anl.gov
Date:   10/06/2010 11:11 PM
Subject:        Re: MPI_TYPE_MAX limit using MPT
Sent by:        parallel-netcdf-bounces at lists.mcs.anl.gov




FWIW, the same test using the blocking interface hit the MPI_TYPE_MAX 
limit at about the same point.

On Wed, 6 Oct 2010, Maxwell Kelley wrote:

>
> My test was indeed using the nonblocking interface; I could re-code with
> the blocking interface if you think that would shed some light. The same
> test run with mvapich2 didn't encounter any problem.  The MPI_TYPE_MAX
> issue is mentioned here
>
> http://lists.mcs.anl.gov/pipermail/mpich-discuss/2010-February/006647.html
>
> so perhaps it's not pnetcdf that is forgetting to free datatypes.
>
> -Max
>
> On Wed, 6 Oct 2010, Rob Latham wrote:
>
>> On Wed, Oct 06, 2010 at 12:29:50PM -0400, Maxwell Kelley wrote:
>>> Is this normal?  Setting MPI_TYPE_MAX to 65536 simply allowed more
>>> I/O to be performed before the error appears. The limit is reached
>>> more quickly using more processors.  Assuming that this is a case of
>>> types not being freed after use, should I just set this limit high
>>> enough that it will never be exceeded during a 12-hour batch job?
>> 
>> I wish we knew more about where the extra data types came from.
>> 
>> I imagine there is some cost to setting MPI_TYPE_MAX to 2 billion.
>> Hopefully, you can find a value that lets you complete your work while
>> I try to find the places where pnetcdf forgets to free datatypes.
>> 
>> Are you still using the nonblocking interface?
>> 
>> ==rob
>> 
>> -- 
>> Rob Latham
>> Mathematics and Computer Science Division
>> Argonne National Lab, IL USA
>> 
>> 
>



