[petsc-users] MPI+OpenMP in PETSC

Barry Smith bsmith at mcs.anl.gov
Wed Nov 19 13:39:54 CST 2014


> On Nov 19, 2014, at 11:43 AM, Evan Um <evanum at gmail.com> wrote:
> 
> Dear PETSC users,
> 
> I would like to ask a question about using an external library with MPI+OpenMP in PETSc. For example, within PETSc, I want to use MUMPS with MPI+OpenMP. This means that if one node has 12 MPI processes and 24GB, MUMPS uses 4 MPI processes with 6GB each, and each MPI process has 3 threads. To do this, what needs to be done at PETSc installation time? After using MUMPS in this way, is there a way to convert from MPI+OpenMP back to flat MPI? Thank you!

 You just need to launch your MPI (PETSc) program so that it uses only 4 MPI processes per node (how to do this depends on your system; it may be options to mpirun or options in a batch system). Then you need to tell MUMPS to use 3 OpenMP threads (this is typically done via the OMP_NUM_THREADS environment variable). Note that these two settings are not really related to PETSc, so we cannot set them automatically.
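A minimal launch sketch of the above, assuming Open MPI's mpirun and a hypothetical executable ./my_petsc_app (the process-mapping flags and the way environment variables propagate to ranks vary by MPI implementation and batch system):

```shell
# Place 4 MPI ranks per node (8 ranks total across 2 nodes).
# --map-by ppr:N:node is Open MPI syntax; other MPI implementations
# (MPICH, Intel MPI, Cray) and batch systems use different flags.
export OMP_NUM_THREADS=3
mpirun -np 8 --map-by ppr:4:node -x OMP_NUM_THREADS \
       ./my_petsc_app -pc_type lu -pc_factor_mat_solver_package mumps
```

The -x OMP_NUM_THREADS option forwards the variable to every rank, so MUMPS's threaded portions (and any threaded BLAS underneath it) each see 3 threads.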

  It is essentially impossible to convert from MPI+OpenMP to flat MPI*. You will need to just use those 4 MPI processes per node for the MPI part.

   Barry

* Yes, you could use MPI_Comm_spawn and the like, but it won't do what you want.
> 
> Regards,
> Evan



More information about the petsc-users mailing list