MPI_TYPE_MAX limit using MPT
Reiner Vogelsang
reiner at sgi.com
Thu Oct 14 01:05:40 CDT 2010
Hello,
the default value of MPI_TYPE_MAX is 1024 in MPT 1.25.
The man page states it clearly:
MPI_TYPE_MAX
Determines the maximum number of data types that can
simultaneously exist for any single MPI process. Use this
variable to increase internal default limits. (This variable
might be required by standard-compliant programs.) MPI
generates an error message if this limit (or the default, if not
set) is exceeded.
Default: 1024
So, if a program keeps committing new datatypes without freeing them,
it will eventually exhaust these internal tables.
What helps is to increase that limit by setting the environment
variable MPI_TYPE_MAX to a higher value. A second method is to inspect
the MPI code in order to find places where MPI_Type_free can release
datatypes that are no longer needed, as in the sketch below.
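A minimal sketch of that second approach (the strided vector type here
is only an assumed example, not taken from any of the codes discussed
in this thread):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Datatype strided;

        /* Build and commit a derived type: 50 doubles with stride 2.
         * Each committed type occupies one slot counted against
         * MPI_TYPE_MAX until it is freed. */
        MPI_Type_vector(50, 1, 2, MPI_DOUBLE, &strided);
        MPI_Type_commit(&strided);

        /* ... use the type in sends/receives or MPI-IO calls ... */

        /* Release the slot; the MPI standard guarantees that any
         * communication still using the type completes normally. */
        MPI_Type_free(&strided);

        MPI_Finalize();
        return 0;
    }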
Kind regards
Reiner
Nils Smeds wrote:
> This appears to be something in the SGI MPT that has been around for a
> long time, and it looks like a possible leak in the MPI library rather
> than in the applications built on top of it?
>
> /Nils
>
> http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-180
>
> * On IRIX6.5, when the C compiler version is greater than 7.4, complicated
>   MPI derived datatype code will work. However, the user should increase
>   the value of the MPI_TYPE_MAX environment variable to some appropriate
>   value to use collective irregular selection code. For example, the
>   current parallel HDF5 test needs to raise MPI_TYPE_MAX to 200,000 to
>   pass the test.
>
>
> http://www.ks.uiuc.edu/Research/namd/2.6/notes.html
> setenv MPI_REQUEST_MAX 10240
> setenv MPI_TYPE_MAX 10240
>
> Then run NAMD with the following command:
>
> mpirun -np <procs> namd2 <configfile>
>
>
>
> http://spec.unipv.it/mpi/results/res2009q1/mpi2007-20090310-00118.csv
> " setenv MPI_TYPE_MAX 32768"
> " Determines the maximum number of data types that can"
> " simultaneously exist for any single MPI process."
> " MPI generates an error message if this limit (or the default,"
> " if not set) is exceeded. Default: 1024"
>
>
>
> ______________________________________________
> Nils Smeds, IBM Deep Computing / World Wide Coordinated Tuning Team
> IT Specialist, Mobile phone: +46-70-793 2639
> Fax. +46-8-793 9523
> Mail address: IBM Sweden; Loc. 5-03; 164 92 Stockholm; SWEDEN
>
>
>
> From: Maxwell Kelley <kelley at giss.nasa.gov>
> To: Rob Latham <robl at mcs.anl.gov>
> Cc: parallel-netcdf at lists.mcs.anl.gov
> Date: 10/06/2010 07:50 PM
> Subject: Re: MPI_TYPE_MAX limit using MPT
> Sent by: parallel-netcdf-bounces at lists.mcs.anl.gov
>
>
>
>
> My test was indeed using the nonblocking interface; I could re-code with
> the blocking interface if you think that would shed some light. The same
> test run with mvapich2 didn't encounter any problem. The MPI_TYPE_MAX
> issue is mentioned here
>
> http://lists.mcs.anl.gov/pipermail/mpich-discuss/2010-February/006647.html
>
> so perhaps it's not pnetcdf that is forgetting to free datatypes.
>
> -Max
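Re-coding from the nonblocking to the blocking interface might look
roughly like the sketch below (illustrative only: ncid, varid, start,
and count are assumed to come from the usual ncmpi_create/ncmpi_def_var
setup, and the variable is taken to be a 2-D double array):

    #include <mpi.h>
    #include <pnetcdf.h>

    /* Nonblocking variant: each iput queues a request internally
     * until ncmpi_wait_all flushes the whole batch. */
    void write_slab_nonblocking(int ncid, int varid, const double *buf,
                                MPI_Offset start[2], MPI_Offset count[2])
    {
        int req, status;
        ncmpi_iput_vara_double(ncid, varid, start, count, buf, &req);
        ncmpi_wait_all(ncid, 1, &req, &status);
    }

    /* Blocking collective variant: the call completes before
     * returning, leaving no pending request behind. */
    void write_slab_blocking(int ncid, int varid, const double *buf,
                             MPI_Offset start[2], MPI_Offset count[2])
    {
        ncmpi_put_vara_double_all(ncid, varid, start, count, buf);
    }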
>
> On Wed, 6 Oct 2010, Rob Latham wrote:
>
>> On Wed, Oct 06, 2010 at 12:29:50PM -0400, Maxwell Kelley wrote:
>>> Is this normal? Setting MPI_TYPE_MAX to 65536 simply allowed more
>>> I/O to be performed before the error appears. The limit is reached
>>> more quickly using more processors. Assuming that this is a case of
>>> types not being freed after use, should I just set this limit high
>>> enough that it will never be exceeded during a 12-hour batch job?
>> I wish we knew more about where the extra data types came from.
>>
>> I imagine there is some cost to setting MPI_TYPE_MAX to 2 billion.
>> Hopefully, you can find a value that lets you complete your work while
>> I try to find the places where pnetcdf forgets to free datatypes.
>>
>> Are you still using the nonblocking interface?
>>
>> ==rob
>>
>> --
>> Rob Latham
>> Mathematics and Computer Science Division
>> Argonne National Lab, IL USA
>>
>>
>
>
>
>
>
--
------------------------------------------------------------------------
Reiner Vogelsang
Senior System Engineer
Silicon Graphics GmbH
Werner-von-Siemens-Ring 1
D-85630 Grasbrunn
Germany

Home Office:
Lohfeldstr. 18
52428 Juelich
VAT ID Number DE12946051
County Court Munich HRB 80748
Management Board: Robert Übelmesser
Phone, direct +49-2461-939265
Fax, direct +49-2461-939266
Phone, switchboard +49-89-46108-0
Fax, Munich +49-89-46108-222
Mobile phone +49-176-14610840
Skype reiner52428
email reiner at sgi.com