<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
I'm building and testing PnetCDF 1.12.1 inside a Docker container,
running under SLURM+PMIx.<br>
I'm seeing a problem that I haven't seen when doing this in a
"regular" Linux (Ubuntu or CentOS) environment, where we used Lmod to
manage the libraries.<br>
For this SLURM/Docker/PMIx arrangement, I invoke the container
environment with either of these commands:<br>
<blockquote><tt>srun --mpi=pmix --container-image=$DOCKER -t
08:00:00 -p batch --pty /bin/bash -i -l</tt><br>
<tt>srun --mpi=none --container-image=$DOCKER -t 08:00:00 -p
batch --pty /bin/bash -i -l</tt><br>
</blockquote>
(I'm using interactive sessions until I get all the bugs worked out.
Then everything will be scripted.)<br>
If I run one of the regression tests manually<br>
<blockquote><tt>cd test/C</tt><br>
<tt>./pres_temp_4D_wr</tt><br>
</blockquote>
I get this error:<br>
<blockquote><tt>[circe-n047:12619] OPAL ERROR: Not initialized in
file pmix3x_client.c at line 112</tt><br>
<tt>--------------------------------------------------------------------------</tt><br>
<tt>The application appears to have been direct launched using
"srun",</tt><br>
<tt>but OMPI was not built with SLURM's PMI support and therefore
cannot</tt><br>
<tt>execute. There are several options for building PMI support
under</tt><br>
<tt>SLURM, depending upon the SLURM version you are using:</tt><br>
<br>
<tt> version 16.05 or later: you can use SLURM's PMIx support.
This</tt><br>
<tt> requires that you configure and build SLURM --with-pmix.</tt><br>
<br>
<tt> Versions earlier than 16.05: you must use either SLURM's
PMI-1 or</tt><br>
<tt> PMI-2 support. SLURM builds PMI-1 by default, or you can
manually</tt><br>
<tt> install PMI-2. You must then build Open MPI using --with-pmi
pointing</tt><br>
<tt> to the SLURM PMI library location.</tt><br>
<br>
<tt>Please configure as appropriate and try again.</tt><br>
<tt>--------------------------------------------------------------------------</tt><br>
<tt>*** An error occurred in MPI_Init</tt><br>
<tt>*** on a NULL communicator</tt><br>
<tt>*** MPI_ERRORS_ARE_FATAL (processes in this communicator will
now abort,</tt><br>
<tt>*** and potentially your MPI job)</tt><br>
<tt>[circe-n047:12619] Local abort before MPI_INIT completed
completed successfully, but am not able to aggregate error
messages, and not able to guarantee that all other processes
were killed!</tt><br>
</blockquote>
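My guess is that the Open MPI inside the container wasn't built
against the PMI/PMIx that this SLURM installation provides, but I
haven't confirmed that. I was planning to check with something along
these lines (just a sketch; I'm assuming <tt>ompi_info</tt> is
available inside the container and <tt>srun --mpi=list</tt> works on
the host):<br>
<blockquote><tt># inside the container: does this Open MPI report any pmix components?</tt><br>
<tt>ompi_info | grep -i pmix</tt><br>
<tt># on the host: which PMI types does SLURM's --mpi= accept?</tt><br>
<tt>srun --mpi=list</tt><br>
</blockquote>
If <tt>ompi_info</tt> shows no pmix support, that would presumably
explain why a direct <tt>srun</tt> launch fails while
<tt>mpirun</tt> works.<br>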
If I use <tt>mpirun</tt><br>
<blockquote><tt>cd test/C</tt><br>
<tt>export OMPI_ALLOW_RUN_AS_ROOT=1</tt><br>
<tt>export OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1</tt><br>
<tt>export TESTSEQRUN="mpirun -n 1"</tt><br>
<tt>mpirun -n 1 pres_temp_4D_wr</tt><br>
</blockquote>
it looks like it works:<br>
<blockquote><tt>*** TESTING C pres_temp_4D_wr for writing classic
file ------ pass</tt><br>
</blockquote>
and I'd be OK working this way (although I'd still like to know why
the problem showed up in the first place).<br>
But if I try to roll all this together and put the exports in my
build-and-test script<br>
<blockquote><tt>export OMPI_ALLOW_RUN_AS_ROOT=1</tt><br>
<tt>export OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1</tt><br>
<tt>export TESTSEQRUN="mpirun -n 1"</tt><br>
<tt>make -i -k check</tt><br>
</blockquote>
I get the original failure with 100% of the test cases. I'm guessing
that the <tt>TESTSEQRUN</tt> variable is getting cleared somehow.
How is it normally set?<br>
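One thought: if <tt>TESTSEQRUN</tt> is substituted into the Makefiles
at configure time, then an exported environment variable would simply
be ignored, since make's own variable definitions take precedence
over the environment (unless you run <tt>make -e</tt>). If that's
what's happening, maybe passing the value on the command line would
stick; a sketch of what I had in mind (assuming <tt>configure</tt>
accepts <tt>TESTSEQRUN</tt> the way its help text suggests):<br>
<blockquote><tt># command-line assignments override Makefile definitions</tt><br>
<tt>make -i -k check TESTSEQRUN="mpirun -n 1"</tt><br>
<tt># or set it when configuring, so it gets baked into the Makefiles</tt><br>
<tt>./configure TESTSEQRUN="mpirun -n 1" [other options]</tt><br>
</blockquote>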
I'm also a bit puzzled by the explanation:<br>
<blockquote><tt>TESTSEQRUN   Run command (on one MPI process) for "make check" <b>on</b></tt><br>
<tt>             <b>cross-compile environment.</b> Example: "aprun -n 1". [default: none]</tt><br>
<tt>TESTMPIRUN   MPI run command for "make ptest", [default: mpiexec -n NP]</tt><br>
</blockquote>
I'm not really cross-compiling here, since the Docker environment
should look identical on every system I run it on.<br>
(I'm building and testing in one session on one system here anyway,
so it should be a native compile.)<br>
But the environment could look a little unusual compared with a
"regular" Linux system.<br>
Is there some other setting that I need to make?<br>
Also, in the <tt>TESTMPIRUN</tt> case, does this mean I have to
set the number of MPI processes explicitly ("mpirun -n 2", etc.)?<br>
Is there a way to express the same default that the test harness
would normally use?<br>
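My guess, assuming the literal <tt>NP</tt> in the default is a
placeholder that the test targets replace with the actual process
count, is that keeping that token would preserve the normal behavior:<br>
<blockquote><tt># keep the NP token so the test harness can substitute its own process count</tt><br>
<tt>export TESTMPIRUN="mpirun -n NP"</tt><br>
</blockquote>
But I'd appreciate confirmation on how <tt>NP</tt> is actually
handled.<br>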
Thanks,<br>
<br>
Carl<br>
<br>
<br>
</body>
</html>