[petsc-users] Fieldsplit with sub pc MUMPS in parallel
Barry Smith
bsmith at mcs.anl.gov
Wed Jan 4 18:36:29 CST 2017
There is something wrong with your setup.
1 process
total: nonzeros=140616, allocated nonzeros=140616
total: nonzeros=68940, allocated nonzeros=68940
total: nonzeros=3584, allocated nonzeros=3584
total: nonzeros=1000, allocated nonzeros=1000
total: nonzeros=8400, allocated nonzeros=8400
2 processes
total: nonzeros=146498, allocated nonzeros=146498
total: nonzeros=73470, allocated nonzeros=73470
total: nonzeros=3038, allocated nonzeros=3038
total: nonzeros=1110, allocated nonzeros=1110
total: nonzeros=6080, allocated nonzeros=6080
total: nonzeros=146498, allocated nonzeros=146498
total: nonzeros=73470, allocated nonzeros=73470
total: nonzeros=6080, allocated nonzeros=6080
total: nonzeros=2846, allocated nonzeros=2846
total: nonzeros=86740, allocated nonzeros=94187
It looks like you are setting up the problem differently in the parallel and sequential runs. If it is supposed to be an identical problem, then the number of nonzeros should be the same in at least the first two matrices.
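One quick way to check is to run each case with -mat_view ::ascii_info, or to print the global counts directly. Here is a minimal sketch (the helper name and where you call it from are assumptions on my part; MatGetInfo() itself is standard PETSc):

    #include <petscmat.h>

    /* Print the global (summed over all ranks) nonzero counts of an
       assembled matrix, so the 1-, 2- and 3-process runs can be
       compared side by side. */
    static PetscErrorCode PrintGlobalNonzeros(Mat A, const char *label)
    {
      MatInfo        info;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = MatGetInfo(A, MAT_GLOBAL_SUM, &info);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD, "%s: nonzeros=%g, allocated nonzeros=%g\n",
                         label, (double)info.nz_used, (double)info.nz_allocated);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

If the assembled system is truly the same, these numbers should match for every process count.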
> On Jan 4, 2017, at 3:39 PM, Karin&NiKo <niko.karin at gmail.com> wrote:
>
> Dear Petsc team,
>
> I am (still) trying to solve Biot's poroelasticity problem:
> <image.png>
>
> I am using a mixed P2-P1 finite element discretization. The matrix of the discretized system in binary format is attached to this email.
>
> I am using the fieldsplit framework to solve the linear system. Since I am running into some trouble, I have decided to go back to basics. Here are the options I am using (see the sketch after this list for a programmatic equivalent):
>
> -ksp_rtol 1.0e-5
> -ksp_type fgmres
> -pc_type fieldsplit
> -pc_fieldsplit_schur_factorization_type full
> -pc_fieldsplit_type schur
> -pc_fieldsplit_schur_precondition selfp
> -fieldsplit_0_pc_type lu
> -fieldsplit_0_pc_factor_mat_solver_package mumps
> -fieldsplit_0_ksp_type preonly
> -fieldsplit_0_ksp_converged_reason
> -fieldsplit_1_pc_type lu
> -fieldsplit_1_pc_factor_mat_solver_package mumps
> -fieldsplit_1_ksp_type preonly
> -fieldsplit_1_ksp_converged_reason
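[A rough programmatic equivalent of these options, only as a sketch: it assumes a KSP named ksp whose two fields are attached elsewhere, e.g. with PCFieldSplitSetIS() or through a DM; the PETSc calls themselves are standard.]

    #include <petscksp.h>

    /* Sketch: configure an existing KSP the same way as the
       command-line options above. */
    static PetscErrorCode SetupSchurFieldsplit(KSP ksp)
    {
      PC             pc;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = KSPSetType(ksp, KSPFGMRES);CHKERRQ(ierr);
      ierr = KSPSetTolerances(ksp, 1.0e-5, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
      ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
      ierr = PCFieldSplitSetSchurFactType(pc, PC_FIELDSPLIT_SCHUR_FACT_FULL);CHKERRQ(ierr);
      ierr = PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_SELFP, NULL);CHKERRQ(ierr);
      /* The per-split solvers (preonly + LU with MUMPS) are still picked up
         from the -fieldsplit_0_* / -fieldsplit_1_* options above. */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }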
>
> On a single proc, everything runs fine: the solver converges in 3 iterations, as expected from the theory (see Run-1-proc.txt [contains -log_view]).
>
> On 2 procs, the solver converges in 28 iterations (see Run-2-proc.txt).
>
> On 3 procs, the solver converges in 91 iterations (see Run-3-proc.txt).
>
> I do not understand this behavior: since MUMPS is a parallel direct solver, shouldn't the solver converge in at most 3 iterations, whatever the number of procs?
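[Note, separate from the setup issue above: per the PETSc documentation, with -pc_fieldsplit_schur_precondition selfp the Schur complement preconditioner is assembled as

    Sp = A11 - A10 * inv(diag(A00)) * A01

so even with exact MUMPS factorizations of each block the outer iteration count need not be exactly 3; it should, however, not depend on the number of processes if the assembled matrix is the same.]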
>
>
> Thanks a lot for your help,
> Nicolas
>
> <Run-1-proc.txt><Run-2-proc.txt><Run-3-proc.txt><1_Warning.txt>