[petsc-users] Irritating behavior of MUMPS with PETSc
Gunnar Jansen
jansen.gunnar at gmail.com
Wed Jun 25 08:44:23 CDT 2014
Hi,
I am trying to solve a problem in parallel with MUMPS as the direct solver. As
long as I run the program on a single node with 6 processes, everything works
fine. But running on 2 nodes with 3 processes each gets MUMPS stuck in the
factorization.
For testing purposes I run ex2.c on a 100x100 grid
(which is of course way too small for a parallel direct solver).
The code is run with:
mpirun ./ex2 -on_error_abort -pc_type lu -pc_factor_mat_solver_package
mumps -ksp_type preonly -log_summary -options_left -m 100 -n 100
-mat_mumps_icntl_4 3
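(The -mat_mumps_icntl_4 3 option just raises MUMPS's diagnostic output so I can see how far the factorization gets. For reference, the equivalent solver selection could also be done inside the code rather than via command-line options; below is a rough, untested sketch using the calls I believe match the 3.4.x interface, with a trivial tridiagonal test matrix as a placeholder for the real problem:)

  /* Sketch only, assuming PETSc 3.4.x: select MUMPS LU through the API
   * instead of -pc_type lu -pc_factor_mat_solver_package mumps.
   * The tridiagonal matrix is just a placeholder system. */
  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    Vec            x, b;
    KSP            ksp;
    PC             pc;
    PetscInt       i, rstart, rend, n = 100;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

    /* Assemble a 1D Laplacian-like tridiagonal test matrix */
    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);
    ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
    for (i = rstart; i < rend; i++) {
      if (i > 0)     { ierr = MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);CHKERRQ(ierr); }
      if (i < n - 1) { ierr = MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);CHKERRQ(ierr); }
      ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    ierr = MatGetVecs(A, &x, &b);CHKERRQ(ierr); /* MatCreateVecs in later PETSc */
    ierr = VecSet(b, 1.0);CHKERRQ(ierr);

    /* KSPPREONLY + PCLU + MUMPS = a pure parallel direct solve */
    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); /* 3-arg form in PETSc >= 3.5 */
    ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
    ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS);CHKERRQ(ierr); /* PCFactorSetMatSolverType in PETSc >= 3.9 */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* runtime options like -mat_mumps_icntl_4 3 still apply */

    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = VecDestroy(&b);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return 0;
  }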
The PETSc configuration I used is:
--prefix=/opt/Petsc/3.4.4.extended --with-mpi=yes
--with-mpi-dir=/opt/Openmpi/1.9a/ --with-debugging=no --download-mumps
--download-scalapack --download-parmetis --download-metis
Is this common behavior, or is there an error in the PETSc configuration I am
using here?
Best,
Gunnar