[petsc-users] Fwd: Fieldsplit with sub pc MUMPS in parallel

Dave May dave.mayhem23 at gmail.com
Wed Jan 4 16:06:12 CST 2017


The issue is your fieldsplit_1 solve. You are applying MUMPS to an
approximate Schur complement - not the true Schur complement. Seemingly the
approximation depends on the communicator size.

If you want to see iteration counts of 2, independent of mesh size and
communicator size, you need to solve the true Schur complement system
(fieldsplit_1) to a specified tolerance (e.g. 1e-10) - don't use preonly.

In practice you probably don't want to iterate on the Schur complement
either, as it is likely too expensive. If you provided fieldsplit with a
spectrally equivalent approximation to S, iteration counts would be larger
than two, but they would be independent of the number of elements and the
comm size.
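
As a concrete sketch of the first suggestion: replacing the fieldsplit_1
options from the original run with something like the lines below makes the
inner KSP iterate on the true (matrix-free) Schur complement, with the LU
factorization of the selfp matrix acting only as its preconditioner. The
tolerance value is illustrative; the flags are standard PETSc options:

  -fieldsplit_1_ksp_type gmres
  -fieldsplit_1_ksp_rtol 1.0e-10
  -fieldsplit_1_pc_type lu
  -fieldsplit_1_pc_factor_mat_solver_package mumps

With -fieldsplit_1_ksp_type preonly, by contrast, only the preconditioner
(the factored selfp approximation) is ever applied, which is why the outer
iteration count grows with the communicator size.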

Thanks,
  Dave




On Wed, 4 Jan 2017 at 22:39, Karin&NiKo <niko.karin at gmail.com> wrote:

> Dear Petsc team,
>
> I am (still) trying to solve Biot's poroelasticity problem:
>  [image: inline image 1]
>
> I am using a mixed P2-P1 finite element discretization. The matrix of the
> discretized system in binary format is attached to this email.
>
> I am using the fieldsplit framework to solve the linear system. Since I am
> facing some troubles, I have decided to go back to simple things. Here are
> the options I am using:
>
> -ksp_rtol 1.0e-5
> -ksp_type fgmres
> -pc_type fieldsplit
> -pc_fieldsplit_schur_factorization_type full
> -pc_fieldsplit_type schur
> -pc_fieldsplit_schur_precondition selfp
> -fieldsplit_0_pc_type lu
> -fieldsplit_0_pc_factor_mat_solver_package mumps
> -fieldsplit_0_ksp_type preonly
> -fieldsplit_0_ksp_converged_reason
> -fieldsplit_1_pc_type lu
> -fieldsplit_1_pc_factor_mat_solver_package mumps
> -fieldsplit_1_ksp_type preonly
> -fieldsplit_1_ksp_converged_reason
>
> On a single proc, everything runs fine: the solver converges in 3
> iterations, in agreement with the theory (see Run-1-proc.txt [contains
> -log_view]).
>
> On 2 procs, the solver converges in 28 iterations (see Run-2-proc.txt).
>
> On 3 procs, the solver converges in 91 iterations (see Run-3-proc.txt).
>
> I do not understand this behavior: since MUMPS is a parallel direct
> solver, shouldn't the solver converge in at most 3 iterations, whatever
> the number of procs?
>
>
> Thanks for your precious help,
> Nicolas