[petsc-users] Slow linear solver via MUMPS

Matthew Overholt overholt at capesim.com
Mon Jan 28 08:26:30 CST 2019


Hi Mohammad,

We tried the same thing for our finite element heat transfer code, and
experimented with both MUMPS and MKL's Cluster PARDISO for about a year,
and were very disappointed with how they scaled.

Give the full PETSc PCG solver with the ILU(0) preconditioner a try (pure
MPI, no hybrid MPI-OpenMP).  We found that it scales very well across two or
more nodes, and even though it is slower than MKL PARDISO on a single node,
its speedup over multiple MPI ranks is so much better that it quickly
overtakes the direct solvers.

ierr = KSPSetType(ksp, KSPCG);   // and stick with the default ILU preconditioner
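
For reference, here is a minimal sketch of the full setup.  It assumes the
matrix A and vectors b, x are already assembled and that ksp, pc, and ierr
are the usual PETSc handles declared elsewhere (those names are just
illustrative).  In parallel the default preconditioner is block Jacobi with
ILU(0) on each block, so setting it explicitly is optional:

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
    ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);     /* conjugate gradient */
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCBJACOBI);CHKERRQ(ierr);   /* block Jacobi, ILU(0) on each block */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);     /* allow -ksp_type / -pc_type overrides */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

Equivalently, if KSPSetFromOptions() is called, you can select all of this at
run time with -ksp_type cg -pc_type bjacobi -sub_pc_type ilu.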

The interconnect we've been using is 25 Gbps Ethernet, which is standard on
the AWS EC2 cloud.

Matt Overholt

On Fri, Jan 25, 2019 at 10:44 AM Mohammad Gohardoust via petsc-users <
petsc-users at mcs.anl.gov> wrote:

> Hi,
>
> I am trying to modify a "pure MPI" code for solving the water movement
> equation in soils, which employs KSP iterative solvers. This code gets
> really slow on the HPC system I am testing it on as I increase the number
> of compute nodes (each node has 28 cores), even from 1 to 2. So I went to
> implement an "MPI-OpenMP" solution like MUMPS. I did this inside PETSc
> by:
>
> KSPSetType(ksp, KSPPREONLY);
> PCSetType(pc, PCLU);
> PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);
> KSPSolve(ksp, ...
>
> and I run it through:
>
> export OMP_NUM_THREADS=16 && mpirun -n 2 ~/Programs/my_programs
>
> The code is working (on my own PC) but it is too slow (maybe about 50
> times slower). Since I am not an expert, I would like to know: is this what
> I should expect from MUMPS!?
>
> Thanks,
> Mohammad
>
>