[petsc-users] Fwd: Questions using PCFIELDSPLIT

Jed Brown jedbrown at mcs.anl.gov
Sat Dec 14 13:19:36 CST 2013


Adriano Côrtes <adrimacortes at gmail.com> writes:

> 2013/12/14 Jed Brown <jedbrown at mcs.anl.gov>:
>> Adriano Côrtes <adrimacortes at gmail.com> writes:
>> Use -ksp_view to check that these are being used.  I recommend setting
>> all these options using run-time options unless your code is taking
>> active control over the choice of algorithm.
>>
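
For reference, a minimal sketch of deferring all those choices to
run-time options (variable names here are generic, not from this
thread; KSPSetOperators is spelled with its PETSc 3.4 signature):

  KSP ksp;
  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* picks up -ksp_type, -pc_type, ... */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);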
>
>
> Actually, it is being done. Check the -snes_view output below.
>
>  0 SNES Function norm 9.644918978350e-02
>     0 KSP Residual norm 1.297271248442e-01
>   1 SNES Function norm 9.644918978350e-02
> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 1
> SNES Object: 1 MPI processes
>   type: ksponly
>   maximum iterations=50, maximum function evaluations=10000
>   tolerances: relative=1e-05, absolute=1e-50, solution=1e-08
>   total number of linear solver iterations=1
>   total number of function evaluations=2
>   SNESLineSearch Object:   1 MPI processes
>     type: basic
>     maxstep=1.000000e+08, minlambda=1.000000e-12
>     tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
>     maximum iterations=1
>   KSP Object:   1 MPI processes
>     type: minres
>     maximum iterations=10000, initial guess is zero
>     tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>     left preconditioning
>     using PRECONDITIONED norm type for convergence test
>   PC Object:   1 MPI processes
>     type: fieldsplit
>       FieldSplit with Schur preconditioner, factorization DIAG
>       Preconditioner for the Schur complement formed from user provided matrix
>       Split info:
>       Split number 0 Defined by IS
>       Split number 1 Defined by IS
>       KSP solver for A00 block
>         KSP Object:        (fieldsplit_u_)         1 MPI processes
>           type: cg
>           maximum iterations=10000, initial guess is zero
>           tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>           left preconditioning
>           using PRECONDITIONED norm type for convergence test
>         PC Object:        (fieldsplit_u_)         1 MPI processes
>           type: jacobi
>           linear system matrix = precond matrix:
>           Matrix Object:           1 MPI processes
>             type: seqaij
>             rows=220, cols=220
>             total: nonzeros=11552, allocated nonzeros=11552
>             total number of mallocs used during MatSetValues calls =0
>               not using I-node routines
>       KSP solver for S = A11 - A10 inv(A00) A01
>         KSP Object:        (fieldsplit_p_)         1 MPI processes
>           type: cg
>           maximum iterations=10000, initial guess is zero
>           tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>           left preconditioning
>           using PRECONDITIONED norm type for convergence test
>         PC Object:        (fieldsplit_p_)         1 MPI processes
>           type: jacobi
>           linear system matrix followed by preconditioner matrix:
>           Matrix Object:           1 MPI processes
>             type: schurcomplement

See that "fieldsplit_p_" uses a MATSCHURCOMPLEMENT and actually iterates
on that reduced system.  This is different from what is typically done
in full-space iterative methods.
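
For concreteness, here is a hedged sketch of the operator PCFIELDSPLIT
builds for that split (the block names follow the view output; you do
not assemble this yourself when using PCFieldSplitSetIS):

  Mat S;
  /* S = A11 - A10 inv(A00) A01, applied matrix-free: every MatMult
     with S triggers a solve with the "KSP of A00" shown below */
  ierr = MatCreateSchurComplement(A00,A00,A01,A10,A11,&S);CHKERRQ(ierr);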

>             rows=100, cols=100
>               Schur complement A11 - A10 inv(A00) A01
>               A11
>                 Matrix Object:                 1 MPI processes
>                   type: seqaij
>                   rows=100, cols=100
>                   total: nonzeros=1936, allocated nonzeros=1936
>                   total number of mallocs used during MatSetValues calls =0
>                     not using I-node routines
>               A10
>                 Matrix Object:                 1 MPI processes
>                   type: seqaij
>                   rows=100, cols=220
>                   total: nonzeros=4752, allocated nonzeros=4752
>                   total number of mallocs used during MatSetValues calls =0
>                     not using I-node routines
>               KSP of A00
>                 KSP Object:        (fieldsplit_u_)         1 MPI processes
>                   type: cg
>                   maximum iterations=10000, initial guess is zero
>                   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>                   left preconditioning
>                   using PRECONDITIONED norm type for convergence test
>                 PC Object:        (fieldsplit_u_)         1 MPI processes
>                   type: jacobi
>                   linear system matrix = precond matrix:
>                   Matrix Object:                   1 MPI processes
>                     type: seqaij
>                     rows=220, cols=220
>                     total: nonzeros=11552, allocated nonzeros=11552
>                     total number of mallocs used during MatSetValues calls =0
>                       not using I-node routines
>               A01
>                 Matrix Object:                 1 MPI processes
>                   type: seqaij
>                   rows=220, cols=100
>                   total: nonzeros=4752, allocated nonzeros=4752
>                   total number of mallocs used during MatSetValues calls =0
>                     not using I-node routines
>           Matrix Object:           1 MPI processes
>             type: seqaij
>             rows=100, cols=100
>             total: nonzeros=1936, allocated nonzeros=1936
>             total number of mallocs used during MatSetValues calls =0
>               not using I-node routines
>     linear system matrix = precond matrix:
>     Matrix Object:     1 MPI processes
>       type: seqaij
>       rows=320, cols=320
>       total: nonzeros=22992, allocated nonzeros=24996
>       total number of mallocs used during MatSetValues calls =1156
>         not using I-node routines
> L2 errors: u=0.0132378 p=0.0593741
>
>>>   where I'm providing the pressure mass matrix as a preconditioner for S.
>>
>> Where are you putting it?
>>
>
> I'm calling a callback function to compute Q (the pressure mass matrix) and then calling
> ierr = PCFieldSplitSchurPrecondition(pc,schurpre,Q);CHKERRQ(ierr);
> where schurpre = PC_FIELDSPLIT_SCHUR_PRE_USER;
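
For completeness, a minimal sketch of that setup (the index sets isu
and isp and the assembly of Q are assumptions, not taken from your
code):

  PC  pc;
  Mat Q;  /* pressure mass matrix, assembled elsewhere */
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc,"u",isu);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc,"p",isp);CHKERRQ(ierr);
  ierr = PCFieldSplitSetType(pc,PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
  ierr = PCFieldSplitSchurPrecondition(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,Q);CHKERRQ(ierr);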
>
>>> When I run with
>>>
>>> -snes_type ksponly
>>> -ksp_type minres
>>> -pc_type fieldsplit
>>>
>>>  #Velocity
>>> -fieldsplit_u_ksp_type cg
>>> -fieldsplit_u_pc_type jacobi
>>> -fieldsplit_u_ksp_rtol 1e-05
>>> -fieldsplit_u_ksp_monitor
>>>
>>> #Pressure
>>> -fieldsplit_p_ksp_type cg
>>> -fieldsplit_p_pc_type jacobi
>>> -fieldsplit_p_ksp_rtol 1e-05
>>> -fieldsplit_p_ksp_monitor
>>>
>>> the solution does not converge
>>
>> "does not converge" is to imprecise to say anything.  What exactly do
>> you observe?
>>
>
> Since it's an analytical solution, I'm computing the L2 norm. When I
> run with -ksp_type minres I get
>
> 0 SNES Function norm 9.644918978350e-02
>     0 KSP Residual norm 1.297271248442e-01
>   1 SNES Function norm 9.644918978350e-02
> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 1

Use -ksp_converged_reason to find out why the linear solve above did not
converge.  Maybe due to an indefinite preconditioner?
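
The same information is available programmatically, e.g. (a sketch;
"ksp" stands for whatever solver handle your code holds):

  KSPConvergedReason reason;
  ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"converged reason: %s\n",
                     KSPConvergedReasons[reason]);CHKERRQ(ierr);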

> L2 errors: u=0.0132378 p=0.0593741
>
> and when I run with -ksp_type gmres I get
>
> 0 SNES Function norm 9.644918978350e-02
>     0 KSP Residual norm 1.297271248442e-01
>     1 KSP Residual norm 1.269027428700e-01
>     2 KSP Residual norm 1.177913273071e-01
>     3 KSP Residual norm 4.746342719564e-05
>     4 KSP Residual norm 4.242617241067e-05
>     5 KSP Residual norm 9.541981683274e-07
>   1 SNES Function norm 1.468436883950e-06
> Nonlinear solve converged due to CONVERGED_ITS iterations 1
> L2 errors: u=4.097e-05 p=8.95462e-05
>
>
>>> (I'm testing an analytical one), but when I use -ksp_type gmres it
>>> converges.
>>
>> Are you sure your matrices are symmetric?  (Including boundary conditions.)
>>
>
> It's supposed to be, but now that you've asked I will check, because
> this could be why minres isn't working for me.

Yes, that, or the preconditioner may not be SPD.  That could be as simple
as getting the sign wrong in a preconditioning matrix.
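
A quick programmatic check is MatIsSymmetric (a sketch; "A" stands for
your assembled Stokes matrix and the tolerance is only illustrative):

  PetscBool symm;
  ierr = MatIsSymmetric(A,1.0e-12,&symm);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"matrix symmetric? %s\n",
                     symm ? "yes" : "no");CHKERRQ(ierr);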

>>> If you want, I can dump both monitor outputs to a file and attach
>>> it to the email.
>>>
>>> My second question is: what options do I have to set if I want to use as
>>> a preconditioner for M the matrix
>>>
>>>       [ A   0 ]
>>> P =   [       ]
>>>       [ 0   Q ]
>>>
>>> where again Q is the pressure mass matrix, with A and Q solved by
>>> CG preconditioned with BJACOBI.
>>
>> I can't tell for sure from what you've sent, but PCFIELDSPLIT is likely
>> iterating on the actual Schur complement S = - B A^{-1} B^T,
>> preconditioned by your matrix Q.
>
> I guess I did not make myself clear. It was supposed to be a new
> question. What I'm doing now is what you said above, but since I'm
> using an inf-sup stable discretization, Q and S are spectrally
> equivalent. What I'd like to do next is precondition the Stokes
> system with A and Q, that is, use ksp(A) and ksp(Q) to apply the
> block-diagonal preconditioner.

If you want to use an iterative method for Q, you can skip over the
matrix-free Schur complement operator by using something like
-fieldsplit_p_ksp_type preonly -fieldsplit_p_pc_type ksp, though usually
a single application (e.g., one V-cycle) is enough and keeps the outer
iteration linear regardless of tolerances.  You might look at the
src/ksp/ksp/examples/tutorials/ex42.c runs (see the makefile or
ex42*.opts) for examples of solving similar systems.
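
Concretely, something like the following options (an untested sketch;
option names as spelled at the time of this thread, and assuming Q is
still supplied via PC_FIELDSPLIT_SCHUR_PRE_USER so the inner CG runs on
Q rather than on the matrix-free S):

  -pc_type fieldsplit
  -pc_fieldsplit_type schur
  -pc_fieldsplit_schur_factorization_type diag
  -fieldsplit_u_ksp_type cg
  -fieldsplit_u_pc_type jacobi
  -fieldsplit_p_ksp_type preonly
  -fieldsplit_p_pc_type ksp
  -fieldsplit_p_ksp_ksp_type cg
  -fieldsplit_p_ksp_pc_type jacobi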