Regression-tests failing inside a container
Carl Ponder
cponder at nvidia.com
Thu Mar 19 12:48:54 CDT 2020
I'm building & testing PNetCDF 1.12.1 inside a Docker container, and
running under SLURM+PMIx.
I'm seeing a problem that I haven't seen when doing this in a "regular"
Linux (Ubuntu or CentOS) environment, where we used Lmod to manage the
libraries.
For this SLURM/Docker/PMIx arrangement, I invoke the container
environment with either of these commands:
srun --mpi=pmix --container-image=$DOCKER -t 08:00:00 -p batch --pty /bin/bash -i -l
srun --mpi=none --container-image=$DOCKER -t 08:00:00 -p batch --pty /bin/bash -i -l
(I'm using interactive sessions until I get all the bugs worked out.
Then everything will be scripted.)
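For reference, once the bugs are worked out the scripted version would look
roughly like this (build_and_test.sh is a placeholder for my own
build-and-test script):

# Non-interactive version of the same invocation: run the build-and-test
# script inside the container instead of an interactive shell.
srun --mpi=pmix --container-image=$DOCKER -t 08:00:00 -p batch /bin/bash -l build_and_test.sh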
If I run one of the regression-tests manually
cd test/C
./pres_temp_4D_wr
I get this error
[circe-n047:12619] OPAL ERROR: Not initialized in file
pmix3x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:
version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.
Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.
Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[circe-n047:12619] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not
able to guarantee that all other processes were killed!
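As a sanity check (assuming ompi_info is available inside the container
image), I can list the PMI-related components that my Open MPI build
actually contains:

# Show which PMI/PMIx support Open MPI was built with
ompi_info | grep -i pmi

so at least I can confirm whether the pmix components are present.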
If I use mpirun
cd test/C
export OMPI_ALLOW_RUN_AS_ROOT=1
export OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1
export TESTSEQRUN="mpirun -n 1"
mpirun -n 1 pres_temp_4D_wr
it looks like it works:
*** TESTING C pres_temp_4D_wr for writing classic file
------ pass
and I'd be ok to work this way (although I'd still like to know why the
problem showed up in the first place).
But if I try to roll all this together and put the exports in my
build-and-test script
export OMPI_ALLOW_RUN_AS_ROOT=1
export OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1
export TESTSEQRUN="mpirun -n 1"
make -i -k check
I get the original failure for 100% of the test cases. I'm guessing
that the TESTSEQRUN variable is getting cleared somehow. How is it
normally set?
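If TESTSEQRUN is substituted into the Makefiles at configure time, then an
export just before "make check" wouldn't override it, since make ignores
environment variables for anything the Makefile itself sets (unless make is
run with -e). My next attempt would be one of these -- a guess on my part,
assuming configure honors TESTSEQRUN the way the install notes describe:

# bake the wrapper in at configure time
./configure TESTSEQRUN="mpirun -n 1"   # plus my usual configure options

# or override the Makefile variable on the make command line,
# where command-line assignments take precedence over Makefile definitions
make -i -k check TESTSEQRUN="mpirun -n 1"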
I'm also a bit puzzled about the explanation
TESTSEQRUN   Run command (on one MPI process) for "make check" on
             cross-compile environment. Example: "aprun -n 1".
             [default: none]
TESTMPIRUN   MPI run command for "make ptest". [default: mpiexec -n NP]
I'm not really cross-compiling here, since the Docker environment should
look identical on every system I run it on.
(I'm building & testing in one session on one system here anyway, so it
should be a native compile.)
But the environment could look a little unusual compared with a
"regular" Linux system.
Is there some other setting that I need to make?
Also, in the TESTMPIRUN case, does this mean I have to set the number of
MPI processes explicitly ("mpirun -n 2", etc.)?
Is there a way to express the same default that the test harness would
normally use?
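My guess, purely from the wording of the default, is that NP is a literal
placeholder that the test scripts replace with each test's own process
count, so something like the following would keep the normal behavior --
but that's an assumption I haven't verified against the harness:

# keep the NP token so the harness can substitute the per-test process count
make ptest TESTMPIRUN="mpirun -n NP"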
Thanks,
Carl