[petsc-users] Irritating behavior of MUMPS with PETSc

Dave May dave.mayhem23 at gmail.com
Wed Jun 25 08:52:39 CDT 2014


This sounds weird.

The launch line you provided doesn't include any information about how
many processes to use (number of nodes / processes per node). I presume you
are using a queuing system. My guess is that the issue lies with either (i)
your job script, (ii) the configuration of the job scheduler on the
machine, or (iii) the MPI installation on the machine.
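As a cross-check (just a sketch; "myhosts" is a placeholder file listing your
two node names, and the exact flags may differ depending on your Open MPI
version and scheduler integration), you could request the mapping explicitly
on the mpirun line instead of relying on the scheduler's defaults:

mpirun -np 6 --hostfile myhosts -npernode 3 ./ex2 -pc_type lu \
    -pc_factor_mat_solver_package mumps -ksp_type preonly -m 100 -n 100

If that runs while the scheduler-driven launch hangs, the job script or the
scheduler configuration is the likely culprit.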

Have you been able to successfully run other PETSc (or any MPI) codes with
the same launch options (2 nodes, 3 processes per node)?
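If not, a quick sanity check is a small MPI program that only reports where
each rank ended up (a minimal sketch, not PETSc-specific; compile it with the
same mpicc that built PETSc):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int rank, size, len;
  char host[MPI_MAX_PROCESSOR_NAME];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this rank's id */
  MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
  MPI_Get_processor_name(host, &len);     /* name of the node this rank runs on */
  printf("rank %d of %d on %s\n", rank, size, host);
  MPI_Finalize();
  return 0;
}

If the six ranks don't show up as 3 per node, or the run hangs in the same
way, the problem is in the launch environment rather than in PETSc or MUMPS.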

Cheers.
  Dave




On 25 June 2014 15:44, Gunnar Jansen <jansen.gunnar at gmail.com> wrote:

> Hi,
>
> I am trying to solve a problem in parallel with MUMPS as the direct solver.
> As long as I run the program on only 1 node with 6 processors, everything
> works fine! But using 2 nodes with 3 processors each gets MUMPS stuck in the
> factorization.
>
> For testing purposes I run ex2.c at a resolution of 100x100
> (which is of course way too small for a direct solver in parallel).
>
> The code is run with:
> mpirun ./ex2 -on_error_abort -pc_type lu -pc_factor_mat_solver_package
> mumps -ksp_type preonly -log_summary -options_left -m 100 -n 100
> -mat_mumps_icntl_4 3
>
> The PETSc configuration I used is:
> --prefix=/opt/Petsc/3.4.4.extended --with-mpi=yes
> --with-mpi-dir=/opt/Openmpi/1.9a/ --with-debugging=no --download-mumps
>  --download-scalapack --download-parmetis --download-metis
>
> Is this common behavior, or is there an error in the PETSc configuration I
> am using here?
>
> Best,
> Gunnar
>

