[mpich-discuss] ROMIO with SGI MPI MPI_TYPE_MAX
Rajeev Thakur
thakur at mcs.anl.gov
Fri Feb 26 16:17:49 CST 2010
That blurb has been in the user guide since 1997!
Rajeev
> -----Original Message-----
> From: mpich-discuss-bounces at mcs.anl.gov
> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of burlen
> Sent: Friday, February 26, 2010 3:46 PM
> To: mpich-discuss at mcs.anl.gov
> Subject: [mpich-discuss] ROMIO with SGI MPI MPI_TYPE_MAX
>
> Hi All,
>
> I have been running into issues when using SGI MPI with collective
> I/O. My program always crashes with the error
>
> MPI has run out of internal datatype entries.
>
> The crash comes quickly with the default limit of 8192. Raising the
> value does prolong the life of the program, but it still crashes after
> roughly 300 type commit / read / type free cycles, which does not seem
> like a very large number for an HPC application.
>
> I have also found a way to reproduce this after only two explicit type
> commits in my application. There is some point-to-point communication
> as well, but none of it commits any types, and we are talking about
> only a handful of MPI calls in total. The reproduction depends on how
> many processes per node are scheduled, e.g. one per core. I have been
> careful to verify that I am not leaking types myself, so my conclusion
> is that SGI MPI is leaking types internally.
>
> I was surprised to see a blurb about just this issue in the ROMIO
> user manual.
>
> Can anyone out there shed some light on this?
>
> Thanks
> Burlen
>
> When using ROMIO with SGI MPI, you may sometimes get an error
> message from SGI MPI: "MPI has run out of internal datatype entries.
> Please set the environment variable MPI_TYPE_MAX for additional
> space." If you get this error message, add the following line to
> your .cshrc file:
>
>     setenv MPI_TYPE_MAX 65536
>
> Use a larger number if you still get the error message.
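>
> For bash users the equivalent would presumably be
>
>     export MPI_TYPE_MAX=65536
>
> (my assumption; the manual only shows the csh form).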
>
> _______________________________________________
> mpich-discuss mailing list
> mpich-discuss at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>