[petsc-users] BJACOBI with FIELDSPLIT

Rossi, Simone srossi at email.unc.edu
Mon Mar 18 15:17:27 CDT 2019


Got it. I could use -ksp_pc_side right to get the same behavior with gmres: now it makes sense.
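For the archives, that would be something along the lines of:

  -ksp_type gmres -ksp_pc_side right

i.e. the outer GMRES switched to right preconditioning, which is what FGMRES does by default.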
Thanks
Simone
________________________________
From: Matthew Knepley <knepley at gmail.com>
Sent: Monday, March 18, 2019 3:58:39 PM
To: Rossi, Simone
Cc: Justin Chang; petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] BJACOBI with FIELDSPLIT

On Mon, Mar 18, 2019 at 3:56 PM Rossi, Simone via petsc-users <petsc-users at mcs.anl.gov> wrote:

To follow up on that: when would you want to use gmres instead of fgmres in the outer ksp?

The difference here is just that FGMRES is right-preconditioned by default, so you do not get the extra application. I think
if you use the regular monitor, -ksp_monitor (rather than -ksp_monitor_true_residual), you will not see the two applications.
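For reference, the options in play here:

  -ksp_monitor                  (preconditioned residual norm only)
  -ksp_monitor_true_residual    (additionally reports ||b - Ax||, as in your output)
  -ksp_pc_side right            (right preconditioning with plain GMRES)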

   Matt


Thanks again for the help,

Simone

________________________________
From: Rossi, Simone
Sent: Monday, March 18, 2019 3:43:04 PM
To: Justin Chang
Cc: Smith, Barry F.; petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] BJACOBI with FIELDSPLIT


Thanks, with fgmres it does work as expected.

I thought gmres would do the same since I'm solving the subblocks "exactly".


Simone


________________________________
From: Justin Chang <jychang48 at gmail.com>
Sent: Monday, March 18, 2019 3:38:34 PM
To: Rossi, Simone
Cc: Smith, Barry F.; petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] BJACOBI with FIELDSPLIT

Use -ksp_type fgmres if your inner ksp solvers are gmres. Maybe that will help?
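Something like this, based on the options you posted (adjust to your setup):

  -ksp_type fgmres -ksp_rtol 1e-6 -pc_type fieldsplit \
    -fieldsplit_0_ksp_type gmres -fieldsplit_0_pc_type hypre -fieldsplit_0_ksp_rtol 1e-12 \
    -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type hypre -fieldsplit_1_ksp_rtol 1e-12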

On Mon, Mar 18, 2019 at 1:33 PM Rossi, Simone via petsc-users <petsc-users at mcs.anl.gov> wrote:

Thanks Barry.

Let me know if you can spot anything in the -ksp_view output below.


KSP Object: 1 MPI processes
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=5000, nonzero initial guess
  tolerances:  relative=0.001, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: fieldsplit
    FieldSplit with MULTIPLICATIVE composition: total splits = 2, blocksize = 2
    Solver info for each split is in the following KSP objects:
    Split number 0 Fields  0
    KSP Object: (fieldsplit_0_) 1 MPI processes
      type: gmres
        restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        happy breakdown tolerance 1e-30
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object: (fieldsplit_0_) 1 MPI processes
      type: hypre
        HYPRE BoomerAMG preconditioning
          Cycle type V
          Maximum number of levels 25
          Maximum number of iterations PER hypre call 1
          Convergence tolerance PER hypre call 0.
          Threshold for strong coupling 0.25
          Interpolation truncation factor 0.
          Interpolation: max elements per row 0
          Number of levels of aggressive coarsening 0
          Number of paths for aggressive coarsening 1
          Maximum row sums 0.9
          Sweeps down         1
          Sweeps up           1
          Sweeps on coarse    1
          Relax down          symmetric-SOR/Jacobi
          Relax up            symmetric-SOR/Jacobi
          Relax on coarse     Gaussian-elimination
          Relax weight  (all)      1.
          Outer relax weight (all) 1.
          Using CF-relaxation
          Not using more complex smoothers.
          Measure type        local
          Coarsen type        Falgout
          Interpolation type  classical
      linear system matrix = precond matrix:
      Mat Object: (fieldsplit_0_) 1 MPI processes
        type: seqaij
        rows=35937, cols=35937
        total: nonzeros=912673, allocated nonzeros=912673
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
    Split number 1 Fields  1
    KSP Object: (fieldsplit_1_) 1 MPI processes
      type: gmres
        restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        happy breakdown tolerance 1e-30
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object: (fieldsplit_1_) 1 MPI processes
      type: hypre
        HYPRE BoomerAMG preconditioning
          Cycle type V
          Maximum number of levels 25
          Maximum number of iterations PER hypre call 1
          Convergence tolerance PER hypre call 0.
          Threshold for strong coupling 0.25
          Interpolation truncation factor 0.
          Interpolation: max elements per row 0
          Number of levels of aggressive coarsening 0
          Number of paths for aggressive coarsening 1
          Maximum row sums 0.9
          Sweeps down         1
          Sweeps up           1
          Sweeps on coarse    1
          Relax down          symmetric-SOR/Jacobi
          Relax up            symmetric-SOR/Jacobi
          Relax on coarse     Gaussian-elimination
          Relax weight  (all)      1.
          Outer relax weight (all) 1.
          Using CF-relaxation
          Not using more complex smoothers.
          Measure type        local
          Coarsen type        Falgout
          Interpolation type  classical
      linear system matrix = precond matrix:
      Mat Object: (fieldsplit_1_) 1 MPI processes
        type: seqaij
        rows=35937, cols=35937
        total: nonzeros=912673, allocated nonzeros=912673
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  linear system matrix = precond matrix:
  Mat Object: () 1 MPI processes
    type: seqaij
    rows=71874, cols=71874
    total: nonzeros=3650692, allocated nonzeros=3650692
    total number of mallocs used during MatSetValues calls =0
      using I-node routines: found 35937 nodes, limit used is 5


________________________________
From: Smith, Barry F. <bsmith at mcs.anl.gov>
Sent: Monday, March 18, 2019 3:27:13 PM
To: Rossi, Simone
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] BJACOBI with FIELDSPLIT


 Simone,

    This is indeed surprising: given the block structure of the matrix and the exact block solves, we'd expect the solver to converge after a single application of the preconditioner. Please send the output of -ksp_view.

   Barry

Also, if you are willing to share your test code, we can try running it to determine why it doesn't converge immediately.
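In the meantime, a minimal standalone test of this kind would look something like the sketch below (not your code: two identical, decoupled 1D Poisson blocks in an interlaced two-field ordering; the run line assumes a PETSc build with hypre):

static char help[] = "Two decoupled 1D Poisson problems, interlaced 2-field ordering.\n";

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, b;
  KSP            ksp;
  PetscInt       n = 1000, i, f;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, help); if (ierr) return ierr;

  /* Interlaced ordering: unknown 2*i+f is node i of field f, so the
     matrix is block diagonal with respect to the two strided fields. */
  ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, 2*n, 2*n, 3, NULL, &A); CHKERRQ(ierr);
  for (f = 0; f < 2; f++) {
    for (i = 0; i < n; i++) {
      PetscInt row = 2*i + f;   /* standard [-1 2 -1] stencil per field */
      ierr = MatSetValue(A, row, row, 2.0, INSERT_VALUES); CHKERRQ(ierr);
      if (i > 0)     { ierr = MatSetValue(A, row, 2*(i-1)+f, -1.0, INSERT_VALUES); CHKERRQ(ierr); }
      if (i < n - 1) { ierr = MatSetValue(A, row, 2*(i+1)+f, -1.0, INSERT_VALUES); CHKERRQ(ierr); }
    }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);

  ierr = MatCreateVecs(A, &x, &b); CHKERRQ(ierr);
  ierr = VecSet(b, 1.0); CHKERRQ(ierr);

  /* All solver choices come from the options database. */
  ierr = KSPCreate(PETSC_COMM_SELF, &ksp); CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A); CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp); CHKERRQ(ierr);
  ierr = VecDestroy(&x); CHKERRQ(ierr);
  ierr = VecDestroy(&b); CHKERRQ(ierr);
  ierr = MatDestroy(&A); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

run with, e.g.,

  ./ex_fieldsplit -ksp_type gmres -ksp_rtol 1e-6 \
    -pc_type fieldsplit -pc_fieldsplit_block_size 2 \
    -fieldsplit_0_ksp_type gmres -fieldsplit_0_pc_type hypre -fieldsplit_0_ksp_rtol 1e-12 \
    -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type hypre -fieldsplit_1_ksp_rtol 1e-12 \
    -ksp_converged_reason -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason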


> On Mar 18, 2019, at 2:14 PM, Rossi, Simone via petsc-users <petsc-users at mcs.anl.gov> wrote:
>
> Dear all,
> I'm debugging my application in which I'm trying to use the FIELDSPLIT preconditioner for solving a 2x2 block matrix.
>
> Currently I'm testing the preconditioner on a decoupled system where I solve two identical and independent Poisson problems. Since the system is decoupled, the default fieldsplit type (multiplicative) should be equivalent to a block Jacobi solver.
> Setting
> -ksp_rtol 1e-6
> while using gmres/hypre on each subblock with
> -fieldsplit_0_ksp_rtol 1e-12
> -fieldsplit_1_ksp_rtol 1e-12
> I'm expecting to converge in 1 iteration with a single solve for each block.
>
> Asking to output the iteration count for the subblocks with
> -ksp_converged_reason
> -fieldsplit_0_ksp_converged_reason
> -fieldsplit_1_ksp_converged_reason
> revealed that the outer solver converges in 1 iteration, but each block is solved 3 times.
> This is the output I get:
>
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 7
>   0 KSP preconditioned resid norm 9.334948012657e+01 true resid norm 1.280164130222e+02 ||r(i)||/||b|| 1.000000000000e+00
>
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 7
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 7
>   1 KSP preconditioned resid norm 1.518151977611e-11 true resid norm 8.123270435936e-12 ||r(i)||/||b|| 6.345491366429e-14
>
> Linear solve converged due to CONVERGED_RTOL iterations 1
>
>
> Are the subblocks actually solved multiple times at every outer iteration?
>
> Thanks for the help,
>
> Simone



--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/