[petsc-users] monitoring the convergence of fieldsplit 0 and 1

Klaij, Christiaan C.Klaij at marin.nl
Tue May 15 04:25:01 CDT 2018


Matt,


Just a reminder. With petsc-3.8.4 the issue is still there.


Chris


dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl

MARIN news: MARIN at WindDays 2018, June 13 & 14, Rotterdam <http://www.marin.nl/web/News/News-items/MARIN-at-WindDays-2018-June-13-14-Rotterdam.htm>

________________________________
From: Matthew Knepley <knepley at gmail.com>
Sent: Wednesday, January 18, 2017 4:13 PM
To: Klaij, Christiaan
Cc: Lawrence Mitchell; petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

On Wed, Jan 18, 2017 at 4:42 AM, Klaij, Christiaan <C.Klaij at marin.nl> wrote:
Thanks Lawrence, that nicely explains the unexpected behaviour!

I guess in general there ought to be getters for the four
ksp(A00)'s that occur in the full factorization.

Yes, we will fix it. I think the default retrieval should return the KSP for the 0,0 block, not the inner one.

   Matt


Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | http://www.marin.nl

MARIN news: http://www.marin.nl/web/News/News-items/Verification-and-validation-exercises-for-flow-around-KVLCC2-tanker.htm

________________________________________
From: Lawrence Mitchell <lawrence.mitchell at imperial.ac.uk>
Sent: Wednesday, January 18, 2017 10:59 AM
To: petsc-users at mcs.anl.gov
Cc: bsmith at mcs.anl.gov; Klaij, Christiaan
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

On 18/01/17 08:40, Klaij, Christiaan wrote:
> Barry,
>
> I've managed to replicate the problem with 3.7.4
> snes/examples/tutorials/ex70.c. Basically I've added
> KSPGetTotalIterations to main (file is attached):

PCFieldSplitGetSubKSP returns, in the Schur case, the KSP obtained from
MatSchurComplementGetKSP(pc->schur, &ksp) in subksp[0], and the KSP that
solves with pc->schur in subksp[1].

In your case, subksp[0] is the (preonly) approximation to A^{-1} *inside*

S = D - C A_inner^{-1} B

And subksp[1] is the approximation to S^{-1}.

Since each application of S to a vector (required when solving with
S^{-1}) takes one application of A^{-1}, and you use 225 iterations in
total to invert S, you also end up with 225 applications of the KSP on
A_inner.
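
(Indeed, from the log below: 14 + 14 + 16 + 16 + 17 + 18 + 20 + 21 + 23 +
22 + 22 + 22 = 225 Schur iterations, hence 225 applications of the preonly
KSP on A_inner, whereas the twelve explicit fieldsplit_0_ solves account
for only 12 x 5 = 60 iterations.)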

There doesn't appear to be a way to get the KSP used for A^{-1} if
you've asked for different approximations to A^{-1} in the 0,0 block
and inside S.
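
As an aside, here is a minimal, untested C sketch (reusing ex70's ksp and
ierr, and assuming MatSchurComplementGetKSP() behaves as described above)
that checks this directly: the inner A00 KSP stored inside the Schur
complement matrix is the very object handed back in subksp[0].

  /* place after KSPSolve() in ex70's main(); petscksp.h covers everything used here */
  KSP      *subksp, innerksp;
  Mat       S, Spre;
  PC        pc;
  PetscInt  n;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCFieldSplitGetSubKSP(pc, &n, &subksp);CHKERRQ(ierr);
  /* subksp[1] solves with the Schur complement S = A11 - A10 inv(A00) A01 */
  ierr = KSPGetOperators(subksp[1], &S, &Spre);CHKERRQ(ierr);
  /* the A00 solver stored inside S ... */
  ierr = MatSchurComplementGetKSP(S, &innerksp);CHKERRQ(ierr);
  /* ... is the same object as subksp[0], so KSPGetTotalIterations() on
     subksp[0] counts applications of A_inner, not the fieldsplit_0_ solves */
  if (innerksp == subksp[0]) {
    ierr = PetscPrintf(PETSC_COMM_WORLD, "subksp[0] is the inner A00 KSP\n");CHKERRQ(ierr);
  }
  ierr = PetscFree(subksp);CHKERRQ(ierr);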

Cheers,

Lawrence

> $ diff -u ex70.c.bak ex70.c
> --- ex70.c.bak  2017-01-18 09:25:46.286174830 +0100
> +++ ex70.c      2017-01-18 09:03:40.904483434 +0100
> @@ -669,6 +669,10 @@
>    KSP            ksp;
>    PetscErrorCode ierr;
>
> +  KSP            *subksp;
> +  PC             pc;
> +  PetscInt       numsplit = 1, nusediter_vv, nusediter_pp;
> +
>    ierr     = PetscInitialize(&argc, &argv, NULL, help);CHKERRQ(ierr);
>    s.nx     = 4;
>    s.ny     = 6;
> @@ -690,6 +694,13 @@
>    ierr = StokesSetupPC(&s, ksp);CHKERRQ(ierr);
>    ierr = KSPSolve(ksp, s.b, s.x);CHKERRQ(ierr);
>
> +  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
> +  ierr = PCFieldSplitGetSubKSP(pc,&numsplit,&subksp); CHKERRQ(ierr);
> +  ierr = KSPGetTotalIterations(subksp[0],&nusediter_vv); CHKERRQ(ierr);
> +  ierr = KSPGetTotalIterations(subksp[1],&nusediter_pp); CHKERRQ(ierr);
> +  ierr = PetscPrintf(PETSC_COMM_WORLD," total u solves = %i\n", nusediter_vv); CHKERRQ(ierr);
> +  ierr = PetscPrintf(PETSC_COMM_WORLD," total p solves = %i\n", nusediter_pp); CHKERRQ(ierr);
> +
>    /* don't trust, verify! */
>    ierr = StokesCalcResidual(&s);CHKERRQ(ierr);
>    ierr = StokesCalcError(&s);CHKERRQ(ierr);
>
> Now run as follows:
>
> $ mpirun -n 2 ./ex70 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type lower -fieldsplit_0_ksp_type gmres -fieldsplit_0_pc_type bjacobi -fieldsplit_1_pc_type jacobi -fieldsplit_1_inner_ksp_type preonly -fieldsplit_1_inner_pc_type jacobi -fieldsplit_1_upper_ksp_type preonly -fieldsplit_1_upper_pc_type jacobi -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 16
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 16
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 17
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 18
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 20
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 21
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 23
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>  total u solves = 225
>  total p solves = 225
>  residual u = 9.67257e-06
>  residual p = 5.42082e-07
>  residual [u,p] = 9.68775e-06
>  discretization error u = 0.0106464
>  discretization error p = 1.85907
>  discretization error [u,p] = 1.8591
>
> So here again the total of 225 is correct for p, but for u it
> should be 60. Hope this helps you find the problem.
>
> Chris
>
>
>
> dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | http://www.marin.nl
>
> MARIN news: http://www.marin.nl/web/News/News-items/Few-places-left-for-Offshore-and-Ship-hydrodynamics-courses.htm
>
> ________________________________________
> From: Klaij, Christiaan
> Sent: Tuesday, January 17, 2017 8:45 AM
> To: Barry Smith
> Cc: petsc-users at mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>
> Well, that's it, all the rest was hard coded. Here's the relevant part of the code:
>
>    CALL PCSetType(pc_system,PCFIELDSPLIT,ierr); CHKERRQ(ierr)
>   CALL PCFieldSplitSetType(pc_system,PC_COMPOSITE_SCHUR,ierr); CHKERRQ(ierr)
>   CALL PCFieldSplitSetIS(pc_system,"0",isgs(1),ierr); CHKERRQ(ierr)
>   CALL PCFieldSplitSetIS(pc_system,"1",isgs(2),ierr); CHKERRQ(ierr)
>   CALL PCFieldSplitSetSchurFactType(pc_system,PC_FIELDSPLIT_SCHUR_FACT_FULL,ierr);CHKERRQ(ierr)
>   CALL PCFieldSplitSetSchurPre(pc_system,PC_FIELDSPLIT_SCHUR_PRE_SELFP,PETSC_NULL_OBJECT,ierr);CHKERRQ(ierr)
>
>   CALL KSPSetTolerances(ksp_system,tol,PETSC_DEFAULT_REAL,PETSC_DEFAULT_REAL,maxiter,ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_rtol","0.01",ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_ksp_rtol","0.01",ierr); CHKERRQ(ierr)
>
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_pc_side","right",ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_ksp_pc_side","right",ierr); CHKERRQ(ierr)
>
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_type","gmres",ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_upper_ksp_type","preonly",ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_upper_pc_type","jacobi",ierr); CHKERRQ(ierr)
>
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_inner_ksp_type","preonly",ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_inner_pc_type","jacobi",ierr); CHKERRQ(ierr)
>
> ________________________________________
> From: Barry Smith <bsmith at mcs.anl.gov>
> Sent: Monday, January 16, 2017 9:28 PM
> To: Klaij, Christiaan
> Cc: petsc-users at mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>
>    Please send all the command line options you use.
>
>
>> On Jan 16, 2017, at 1:47 AM, Klaij, Christiaan <C.Klaij at marin.nl> wrote:
>>
>> Barry,
>>
>> Sure, here's the output with:
>>
>> -sys_ksp_view -sys_ksp_converged_reason -sys_fieldsplit_0_ksp_converged_reason -sys_fieldsplit_1_ksp_converged_reason
>>
>> (In my previous email, I rearranged 0 & 1 for easy summing.)
>>
>> Chris
>>
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>> Linear sys_ solve converged due to CONVERGED_RTOL iterations 6
>> KSP Object:(sys_) 1 MPI processes
>>  type: fgmres
>>    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>    GMRES: happy breakdown tolerance 1e-30
>>  maximum iterations=300, initial guess is zero
>>  tolerances:  relative=0.01, absolute=1e-50, divergence=10000.
>>  right preconditioning
>>  using UNPRECONDITIONED norm type for convergence test
>> PC Object:(sys_) 1 MPI processes
>>  type: fieldsplit
>>    FieldSplit with Schur preconditioner, factorization FULL
>>    Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses (lumped, if requested) A00's diagonal's inverse
>>    Split info:
>>    Split number 0 Defined by IS
>>    Split number 1 Defined by IS
>>    KSP solver for A00 block
>>      KSP Object:      (sys_fieldsplit_0_)       1 MPI processes
>>        type: gmres
>>          GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>          GMRES: happy breakdown tolerance 1e-30
>>        maximum iterations=10000, initial guess is zero
>>        tolerances:  relative=0.01, absolute=1e-50, divergence=10000.
>>        right preconditioning
>>        using UNPRECONDITIONED norm type for convergence test
>>      PC Object:      (sys_fieldsplit_0_)       1 MPI processes
>>        type: ilu
>>          ILU: out-of-place factorization
>>          0 levels of fill
>>          tolerance for zero pivot 2.22045e-14
>>          matrix ordering: natural
>>          factor fill ratio given 1., needed 1.
>>            Factored matrix follows:
>>              Mat Object:               1 MPI processes
>>                type: seqaij
>>                rows=9600, cols=9600
>>                package used to perform factorization: petsc
>>                total: nonzeros=47280, allocated nonzeros=47280
>>                total number of mallocs used during MatSetValues calls =0
>>                  not using I-node routines
>>        linear system matrix = precond matrix:
>>        Mat Object:        (sys_fieldsplit_0_)         1 MPI processes
>>          type: seqaij
>>          rows=9600, cols=9600
>>          total: nonzeros=47280, allocated nonzeros=47280
>>          total number of mallocs used during MatSetValues calls =0
>>            not using I-node routines
>>    KSP solver for upper A00 in upper triangular factor
>>      KSP Object:      (sys_fieldsplit_1_upper_)       1 MPI processes
>>        type: preonly
>>        maximum iterations=10000, initial guess is zero
>>        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>        left preconditioning
>>        using NONE norm type for convergence test
>>      PC Object:      (sys_fieldsplit_1_upper_)       1 MPI processes
>>        type: jacobi
>>        linear system matrix = precond matrix:
>>        Mat Object:        (sys_fieldsplit_0_)         1 MPI processes
>>          type: seqaij
>>          rows=9600, cols=9600
>>          total: nonzeros=47280, allocated nonzeros=47280
>>          total number of mallocs used during MatSetValues calls =0
>>            not using I-node routines
>>    KSP solver for S = A11 - A10 inv(A00) A01
>>      KSP Object:      (sys_fieldsplit_1_)       1 MPI processes
>>        type: gmres
>>          GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>          GMRES: happy breakdown tolerance 1e-30
>>        maximum iterations=10000, initial guess is zero
>>        tolerances:  relative=0.01, absolute=1e-50, divergence=10000.
>>        right preconditioning
>>        using UNPRECONDITIONED norm type for convergence test
>>      PC Object:      (sys_fieldsplit_1_)       1 MPI processes
>>        type: ilu
>>          ILU: out-of-place factorization
>>          0 levels of fill
>>          tolerance for zero pivot 2.22045e-14
>>          matrix ordering: natural
>>          factor fill ratio given 1., needed 1.
>>            Factored matrix follows:
>>              Mat Object:               1 MPI processes
>>                type: seqaij
>>                rows=3200, cols=3200
>>                package used to perform factorization: petsc
>>                total: nonzeros=40404, allocated nonzeros=40404
>>                total number of mallocs used during MatSetValues calls =0
>>                  not using I-node routines
>>        linear system matrix followed by preconditioner matrix:
>>        Mat Object:        (sys_fieldsplit_1_)         1 MPI processes
>>          type: schurcomplement
>>          rows=3200, cols=3200
>>            Schur complement A11 - A10 inv(A00) A01
>>            A11
>>              Mat Object:              (sys_fieldsplit_1_)               1 MPI processes
>>                type: seqaij
>>                rows=3200, cols=3200
>>                total: nonzeros=40404, allocated nonzeros=40404
>>                total number of mallocs used during MatSetValues calls =0
>>                  not using I-node routines
>>            A10
>>              Mat Object:               1 MPI processes
>>                type: seqaij
>>                rows=3200, cols=9600
>>                total: nonzeros=47280, allocated nonzeros=47280
>>                total number of mallocs used during MatSetValues calls =0
>>                  not using I-node routines
>>            KSP of A00
>>              KSP Object:              (sys_fieldsplit_1_inner_)               1 MPI processes
>>                type: preonly
>>                maximum iterations=10000, initial guess is zero
>>                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>                left preconditioning
>>                using NONE norm type for convergence test
>>              PC Object:              (sys_fieldsplit_1_inner_)               1 MPI processes
>>                type: jacobi
>>                linear system matrix = precond matrix:
>>                Mat Object:                (sys_fieldsplit_0_)                 1 MPI processes
>>                  type: seqaij
>>                  rows=9600, cols=9600
>>                  total: nonzeros=47280, allocated nonzeros=47280
>>                  total number of mallocs used during MatSetValues calls =0
>>                    not using I-node routines
>>            A01
>>              Mat Object:               1 MPI processes
>>                type: seqaij
>>                rows=9600, cols=3200
>>                total: nonzeros=47280, allocated nonzeros=47280
>>                total number of mallocs used during MatSetValues calls =0
>>                  not using I-node routines
>>        Mat Object:         1 MPI processes
>>          type: seqaij
>>          rows=3200, cols=3200
>>          total: nonzeros=40404, allocated nonzeros=40404
>>          total number of mallocs used during MatSetValues calls =0
>>            not using I-node routines
>>  linear system matrix followed by preconditioner matrix:
>>  Mat Object:   1 MPI processes
>>    type: nest
>>    rows=12800, cols=12800
>>      Matrix object:
>>        type=nest, rows=2, cols=2
>>        MatNest structure:
>>        (0,0) : prefix="mom_", type=seqaij, rows=9600, cols=9600
>>        (0,1) : prefix="grad_", type=seqaij, rows=9600, cols=3200
>>        (1,0) : prefix="div_", type=seqaij, rows=3200, cols=9600
>>        (1,1) : prefix="stab_", type=seqaij, rows=3200, cols=3200
>>  Mat Object:   1 MPI processes
>>    type: nest
>>    rows=12800, cols=12800
>>      Matrix object:
>>        type=nest, rows=2, cols=2
>>        MatNest structure:
>>        (0,0) : prefix="sys_fieldsplit_0_", type=seqaij, rows=9600, cols=9600
>>        (0,1) : type=seqaij, rows=9600, cols=3200
>>        (1,0) : type=seqaij, rows=3200, cols=9600
>>        (1,1) : prefix="sys_fieldsplit_1_", type=seqaij, rows=3200, cols=3200
>> nusediter_vv          37
>> nusediter_pp          37
>>
>>
>>
>> dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
>> MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | http://www.marin.nl
>>
>> MARIN news: http://www.marin.nl/web/News/News-items/The-Ocean-Cleanup-testing-continues.htm
>>
>> ________________________________________
>> From: Barry Smith <bsmith at mcs.anl.gov>
>> Sent: Friday, January 13, 2017 7:51 PM
>> To: Klaij, Christiaan
>> Cc: petsc-users at mcs.anl.gov
>> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>>
>>   Yes, I would have expected this to work. Could you send the output from -ksp_view in this case?
>>
>>
>>> On Jan 13, 2017, at 3:46 AM, Klaij, Christiaan <C.Klaij at marin.nl> wrote:
>>>
>>> Barry,
>>>
>>> It's been a while but I'm finally using this function in
>>> 3.7.4. Is it supposed to work with fieldsplit? Here's why.
>>>
>>> I'm solving a Navier-Stokes system with fieldsplit (pc has one
>>> velocity solve and one pressure solve) and trying to retrieve the
>>> totals like this:
>>>
>>> CALL KSPSolve(ksp_system,rr_system,xx_system,ierr); CHKERRQ(ierr)
>>> CALL PCFieldSplitGetSubKSP(pc_system,numsplit,subksp,ierr); CHKERRQ(ierr)
>>> CALL KSPGetTotalIterations(subksp(1),nusediter_vv,ierr); CHKERRQ(ierr)
>>> CALL KSPGetTotalIterations(subksp(2),nusediter_pp,ierr); CHKERRQ(ierr)
>>> print *, 'nusediter_vv', nusediter_vv
>>> print *, 'nusediter_pp', nusediter_pp
>>>
>>> Running the code shows this surprise:
>>>
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>>>
>>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
>>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
>>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>>>
>>> nusediter_vv          37
>>> nusediter_pp          37
>>>
>>> So the value of nusediter_pp is indeed 37, but for nusediter_vv
>>> it should be 66. Any idea what went wrong?
>>>
>>> Chris
>>>
>>>
>>>
>>> dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
>>> MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | http://www.marin.nl
>>>
>>> MARIN news: http://www.marin.nl/web/News/News-items/MARIN-wishes-you-a-challenging-inspiring-2017.htm
>>>
>>> ________________________________________
>>> From: Barry Smith <bsmith at mcs.anl.gov>
>>> Sent: Saturday, April 11, 2015 12:27 AM
>>> To: Klaij, Christiaan
>>> Cc: petsc-users at mcs.anl.gov
>>> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>>>
>>> Chris,
>>>
>>> I have added KSPGetTotalIterations() to the branch barry/add-ksp-total-iterations/master and next. After tests it will go into master
>>>
>>> Barry
>>>
>>>> On Apr 10, 2015, at 8:07 AM, Klaij, Christiaan <C.Klaij at marin.nl> wrote:
>>>>
>>>> Barry,
>>>>
>>>> Sure, I can call PCFieldSplitGetSubKSP() to get the fieldsplit_0
>>>> ksp and then KSPGetIterationNumber, but what does this number
>>>> mean?
>>>>
>>>> It appears to be the number of iterations of the last time that
>>>> the subsystem was solved, right? If so, this corresponds to the
>>>> last iteration of the coupled system; what about all the previous
>>>> iterations?
>>>>
>>>> Chris
>>>> ________________________________________
>>>> From: Barry Smith <bsmith at mcs.anl.gov>
>>>> Sent: Friday, April 10, 2015 2:48 PM
>>>> To: Klaij, Christiaan
>>>> Cc: petsc-users at mcs.anl.gov
>>>> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>>>>
>>>> Chris,
>>>>
>>>>   It appears you should call PCFieldSplitGetSubKSP() and then get the information you want out of the individual KSPs. If this doesn't work please let us know.
>>>>
>>>> Barry
>>>>
>>>>> On Apr 10, 2015, at 6:48 AM, Klaij, Christiaan <C.Klaij at marin.nl> wrote:
>>>>>
>>>>> A question when using PCFieldSplit: for each linear iteration of
>>>>> the system, how many iterations for fieldsplit 0 and 1?
>>>>>
>>>>> One way to find out is to run with -ksp_monitor,
>>>>> -fieldsplit_0_ksp_monitor and -fieldsplit_1_ksp_monitor. This
>>>>> gives the complete convergence history.
>>>>>
>>>>> Another way, suggested by Matt, is to use -ksp_monitor,
>>>>> -fieldsplit_0_ksp_converged_reason and
>>>>> -fieldsplit_1_ksp_converged_reason. This gives only the totals
>>>>> for fieldsplit 0 and 1 (but without saying for which one).
>>>>>
>>>>> Both ways require one to somehow process the output, which is a bit
>>>>> inconvenient. Could KSPGetResidualHistory perhaps return (some)
>>>>> information on the subsystems' convergence for processing inside
>>>>> the code?
>>>>>
>>>>> Chris
>>>>>
>>>>>
>>>>> dr. ir. Christiaan Klaij
>>>>> CFD Researcher
>>>>> Research & Development
>>>>> E C.Klaij at marin.nl
>>>>> T +31 317 49 33 44
>>>>>
>>>>>
>>>>> MARIN
>>>>> 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
>>>>> T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
>>>>>
>>>>
>>>
>>
>




--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

