[petsc-users] Fwd: Fieldsplit with sub pc MUMPS in parallel

Barry Smith bsmith at mcs.anl.gov
Wed Jan 4 17:32:44 CST 2017


> On Jan 4, 2017, at 4:06 PM, Dave May <dave.mayhem23 at gmail.com> wrote:
> 
> The issue is your fieldsplit_1 solve. You are applying mumps to an approximate Schur complement - not the true Schur complement. Seemingly the approximation is dependent on the communicator size.

    Yes, but why and how is it dependent on the communicator size? From the output:

Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses (lumped, if requested) A00's diagonal's inverse.   

   Note to PETSc developers: this output is horrible and needs to be fixed. "(lumped, if requested)" WTF! If lumping was requested, the output should just say lumping was used; if lumping was not used, it shouldn't say anything!! I've fixed this in master.

  If Sp = A11 - A10 * inv(diagonal(A00)) * A01, shouldn't this be independent of the number of processes?
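
  To spell out the distinction Dave is drawing (a plain-text sketch; the Sp formula is the selfp definition from the manual page):

      S  = A11 - A10 * inv(A00)           * A01     (true Schur complement)
      Sp = A11 - A10 * inv(diagonal(A00)) * A01     (selfp approximation)

  With -fieldsplit_1_ksp_type preonly, the LU factorization of Sp is applied once in place of a solve with S, so the outer FGMRES iteration count reflects how well Sp approximates S.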

  Another note to PETSc developers: I am thinking PCFIELDSPLIT is way too complex. Perhaps it should be factored into a PCSCHURCOMPLEMENT that only does 2-by-2 blocks via Schur complements and a PCFIELDSPLIT that does non-Schur-complement methods for any number of blocks.

   Barry

> 
> If you want to see iteration counts of 2, independent of mesh size and communicator size, you need to solve the true Schur complement system (fieldsplit_1) to a specified tolerance (e.g. 1e-10) - don't use preonly.
> 
> In practice you probably don't want to iterate on the Schur complement either, as it is likely too expensive. If you provided fieldsplit with a spectrally equivalent approximation to S, iteration counts would be larger than two, but they would be independent of the number of elements and the comm size.
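> 
> As a rough sketch (untested; same option style as in your list below), replacing preonly on the Schur field with an inner Krylov solve would look something like
> 
>   -fieldsplit_1_ksp_type gmres
>   -fieldsplit_1_ksp_rtol 1.0e-10
>   -fieldsplit_1_pc_type lu
>   -fieldsplit_1_pc_factor_mat_solver_package mumps
> 
> so the LU factorization of the selfp matrix becomes the preconditioner for an iteration on the true Schur complement rather than being used as the solve itself.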
> 
> Thanks,
>   Dave
> 
> 
> 
> 
> On Wed, 4 Jan 2017 at 22:39, Karin&NiKo <niko.karin at gmail.com> wrote:
> Dear Petsc team,
> 
> I am (still) trying to solve Biot's poroelasticity problem:
> [attached image: the governing equations of the poroelasticity problem]
> 
> I am using a mixed P2-P1 finite element discretization. The matrix of the discretized system in binary format is attached to this email.
> 
> I am using the fieldsplit framework to solve the linear system. Since I am facing some trouble, I have decided to go back to simple things. Here are the options I am using:
> 
> -ksp_rtol 1.0e-5
> -ksp_type fgmres
> -pc_type fieldsplit
> -pc_fieldsplit_schur_factorization_type full
> -pc_fieldsplit_type schur
> -pc_fieldsplit_schur_precondition selfp
> -fieldsplit_0_pc_type lu
> -fieldsplit_0_pc_factor_mat_solver_package mumps
> -fieldsplit_0_ksp_type preonly
> -fieldsplit_0_ksp_converged_reason
> -fieldsplit_1_pc_type lu
> -fieldsplit_1_pc_factor_mat_solver_package mumps
> -fieldsplit_1_ksp_type preonly
> -fieldsplit_1_ksp_converged_reason
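> 
> In case it is useful, the same setup written against the C API looks roughly like the following (an untested sketch with error checking omitted; A is my assembled matrix and isU/isP are the index sets of the displacement and pressure dofs, which come from my own code):
> 
>   #include <petscksp.h>
> 
>   /* Rough sketch: build the same FGMRES + Schur fieldsplit solver as the options above. */
>   static PetscErrorCode BuildSolver(Mat A, IS isU, IS isP, KSP *outksp)
>   {
>     KSP      ksp, *subksp;
>     PC       pc;
>     PetscInt i, nsplits;
> 
>     KSPCreate(PETSC_COMM_WORLD, &ksp);
>     KSPSetOperators(ksp, A, A);
>     KSPSetType(ksp, KSPFGMRES);
>     KSPSetTolerances(ksp, 1.0e-5, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);
>     KSPGetPC(ksp, &pc);
>     PCSetType(pc, PCFIELDSPLIT);
>     PCFieldSplitSetIS(pc, "0", isU);                      /* displacement block */
>     PCFieldSplitSetIS(pc, "1", isP);                      /* pressure block */
>     PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);
>     PCFieldSplitSetSchurFactType(pc, PC_FIELDSPLIT_SCHUR_FACT_FULL);
>     PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_SELFP, NULL);
>     KSPSetUp(ksp);
>     PCFieldSplitGetSubKSP(pc, &nsplits, &subksp);         /* the two inner solvers */
>     for (i = 0; i < nsplits; i++) {
>       PC subpc;
>       KSPSetType(subksp[i], KSPPREONLY);
>       KSPGetPC(subksp[i], &subpc);
>       PCSetType(subpc, PCLU);
>       PCFactorSetMatSolverPackage(subpc, MATSOLVERMUMPS); /* LU via MUMPS */
>     }
>     PetscFree(subksp);
>     *outksp = ksp;
>     return 0;
>   }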
> 
> On a single proc, everything runs fine: the solver converges in 3 iterations, in agreement with the theory (see Run-1-proc.txt, which contains the -log_view output).
> 
> On 2 procs, the solver converges in 28 iterations (see Run-2-proc.txt).
> 
> On 3 procs, the solver converges in 91 iterations (see Run-3-proc.txt).
> 
> I do not understand this behavior: since MUMPS is a parallel direct solver, shouldn't the solver converge in at most 3 iterations, whatever the number of procs?
> 
> 
> Thanks for your precious help,
> Nicolas
> 


