[petsc-users] Full OpenMP strategy MUMPS

Piotr Sierant piotr.sierant at uj.edu.pl
Mon Jul 29 12:23:09 CDT 2019


Hello everyone,


I am trying to use PETSc with MUMPS in the full OpenMP strategy, motivated by data from SciPost Phys. 5, 045 (2018), which suggest that this can reduce RAM usage for particular system sizes.

I cannot get it to work as described at

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSOLVERMUMPS.html

I configured PETSc with the following options:

Configure options --download-mpich --download-scalapack --download-cmake --with-openmp --download-metis --download-mumps --with-threadsafety --with-log=0 --with-debugging=0 --download-hwloc --with-blaslapack-dir=/opt/intel/Compiler/19.0/compilers_and_libraries_2019.4.243/linux/mkl

(I also configured SLEPc, which I need for shift-and-invert and hence for the LU factorization from MUMPS; a sketch of the solver setup is at the end of this message.) With OMP_NUM_THREADS=16 I execute:

mpirun -n 2 ./my_program -mat_mumps_use_omp_threads 8

obtaining

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Argument out of range
[0]PETSC ERROR: number of OpenMP threads 8 can not be < 1 or > the MPI shared memory communicator size 1

(...)
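
If I read the error correctly, the limit comes from the size of the MPI shared-memory communicator that PETSc builds, presumably with MPI_Comm_split_type and MPI_COMM_TYPE_SHARED (my assumption from the error text, not something I have verified in the PETSc source). A standalone check of that size, in plain MPI and independent of PETSc:

/* shmsize.c - print the size of the MPI shared-memory communicator
 * each rank belongs to (i.e., how many ranks share a node).
 * Build: mpicc shmsize.c -o shmsize
 * Run:   mpirun -n 2 ./shmsize
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Comm shmcomm;
  int      wrank, shmsize;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
  /* Group ranks that can share memory, i.e., ranks on the same node */
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &shmcomm);
  MPI_Comm_size(shmcomm, &shmsize);
  printf("world rank %d: shared-memory communicator size = %d\n",
         wrank, shmsize);
  MPI_Comm_free(&shmcomm);
  MPI_Finalize();
  return 0;
}

With 2 ranks on a single node I would expect a shared-memory communicator of size 2 on each rank, while the error above suggests PETSc sees size 1.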

What should I do to be able to run the code with a single MPI rank (or a few ranks) and with MUMPS operating with 8 OpenMP threads? I will greatly appreciate any comments.
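
For context, my solver setup follows the standard SLEPc shift-and-invert pattern; below is a minimal sketch with a toy diagonal matrix standing in for the actual operator (illustrative only, not my exact code; error checking omitted for brevity):

/* sinvert_mumps.c - sketch of shift-and-invert with MUMPS as LU solver */
#include <slepceps.h>

int main(int argc, char **argv)
{
  Mat      A;
  EPS      eps;
  ST       st;
  KSP      ksp;
  PC       pc;
  PetscInt n = 100, i, Istart, Iend;

  SlepcInitialize(&argc, &argv, NULL, NULL);

  /* Toy stand-in for the real Hamiltonian: a diagonal matrix */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++)
    MatSetValue(A, i, i, (PetscScalar)(i + 1), INSERT_VALUES);
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  EPSCreate(PETSC_COMM_WORLD, &eps);
  EPSSetOperators(eps, A, NULL);
  EPSSetProblemType(eps, EPS_HEP);
  EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);
  EPSSetTarget(eps, 50.0);                       /* interior eigenvalues */

  /* Shift-and-invert: ST wraps a KSP/PC doing a direct LU solve via MUMPS */
  EPSGetST(eps, &st);
  STSetType(st, STSINVERT);
  STGetKSP(st, &ksp);
  KSPSetType(ksp, KSPPREONLY);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCLU);
  PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);

  EPSSetFromOptions(eps);  /* command-line options still apply */
  EPSSolve(eps);

  EPSDestroy(&eps);
  MatDestroy(&A);
  SlepcFinalize();
  return 0;
}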


Thanks, Piotr




