[petsc-users] Multigrid
Matthew Knepley
knepley at gmail.com
Tue May 1 17:22:56 CDT 2012
On Tue, May 1, 2012 at 6:18 PM, Karthik Duraisamy <dkarthik at stanford.edu> wrote:
> Hello,
>
> Sorry (and thanks for the reply). I've attached the no-multigrid case. I
> didn't include it before because, at least to the untrained eye, everything
> looks the same.
>
Did you send all the output from the MG case? There must be a PC around it.
By default it's GMRES, so there would be an extra GMRES loop compared to the
case without MG.
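[For readers of the archive: a minimal, self-contained sketch of the structure being
discussed may help. This is not the poster's application; it assembles a toy 1-D
Laplacian, the binary name is made up, it is written against a recent PETSc API
(three-argument KSPSetOperators(), MatCreateVecs()), and error checking is omitted
for brevity. It shows the outer GMRES KSP wrapping a one-level PCMG whose level-0
solver is the nested KSP that -ksp_view reports under the mg_levels_0_ prefix.]

/*
 * Minimal sketch: outer GMRES KSP with a one-level PCMG preconditioner,
 * mirroring the -ksp_view output posted in this thread.
 * Run with, e.g.:  mpiexec -n 8 ./mg_sketch -ksp_view     (binary name arbitrary)
 */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp, level0;
  PC       pc;
  PetscInt i, Istart, Iend, n = 1000;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Assemble a toy 1-D Laplacian so the example is self-contained */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
    if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  /* Outer KSP (GMRES by default) preconditioned by a one-level PCMG */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCMG);
  PCMGSetLevels(pc, 1, NULL);    /* make the single level explicit, as in the posted output */

  /* The level-0 solver is itself a KSP; -ksp_view reports it with the
     mg_levels_0_ prefix, and it can be changed here or with options such as
     -mg_levels_0_ksp_type and -mg_levels_0_pc_type */
  PCMGGetCoarseSolve(pc, &level0);
  KSPSetType(level0, KSPGMRES);  /* set explicitly here, for illustration only */

  KSPSetFromOptions(ksp);        /* honors -ksp_view and all prefixed options */
  KSPSolve(ksp, b, x);

  VecDestroy(&x);
  VecDestroy(&b);
  MatDestroy(&A);
  KSPDestroy(&ksp);
  PetscFinalize();
  return 0;
}

[Running this with -ksp_view shows the same two-layer structure as the output
above: the outer gmres/mg pair, and inside it the mg_levels_0_ GMRES solve with
PETSc's usual parallel default of block Jacobi with ILU(0) on each block.]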
Matt
> Regards,
> Karthik
>
> KSP Object: 8 MPI processes
> type: gmres
> GMRES: restart=100, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> GMRES: happy breakdown tolerance 1e-30
> maximum iterations=1
> using preconditioner applied to right hand side for initial guess
> tolerances: relative=1e-05, absolute=1e-50, divergence=1e+10
> left preconditioning
> using nonzero initial guess
> using PRECONDITIONED norm type for convergence test
> PC Object: 8 MPI processes
> type: bjacobi
> block Jacobi: number of blocks = 8
> Local solve is same for all blocks, in the following KSP and PC objects:
> KSP Object: (sub_) 1 MPI processes
> type: preonly
> maximum iterations=10000, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (sub_) 1 MPI processes
> type: ilu
> ILU: out-of-place factorization
> 0 levels of fill
> tolerance for zero pivot 1e-12
> using diagonal shift to prevent zero pivot
> matrix ordering: natural
> factor fill ratio given 1, needed 1
> Factored matrix follows:
> Matrix Object: 1 MPI processes
> type: seqaij
> rows=9015, cols=9015
> package used to perform factorization: petsc
> total: nonzeros=517777, allocated nonzeros=517777
> total number of mallocs used during MatSetValues calls =0
> using I-node routines: found 3476 nodes, limit used is 5
> linear system matrix = precond matrix:
> Matrix Object: 1 MPI processes
> type: seqaij
> rows=9015, cols=9015
> total: nonzeros=517777, allocated nonzeros=517777
> total number of mallocs used during MatSetValues calls =0
> using I-node routines: found 3476 nodes, limit used is 5
> linear system matrix = precond matrix:
> Matrix Object: 8 MPI processes
> type: mpiaij
> rows=75000, cols=75000
> total: nonzeros=4427800, allocated nonzeros=4427800
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 3476 nodes, limit used is 5
>
>
> ----- Original Message -----
> From: "Matthew Knepley" <knepley at gmail.com>
> To: "PETSc users list" <petsc-users at mcs.anl.gov>
> Sent: Tuesday, May 1, 2012 3:15:14 PM
> Subject: Re: [petsc-users] Multigrid
>
>
> On Tue, May 1, 2012 at 6:12 PM, Karthik Duraisamy <dkarthik at stanford.edu> wrote:
>
> Hello Barry,
>
> Thank you for your super quick response. I have attached the output of
> ksp_view, and it is practically the same as when I don't use PCMG. The
> part I don't understand is how PCMG is able to function at the zero grid level
> and still produce much better convergence than when using the default PC.
> Is there any additional smoothing or interpolation going on?
>
> You only included one output, so I have no way of knowing what you used
> before. However, this is running GMRES/ILU.
>
>
> Also, for Algebraic Multigrid, would you recommend BoomerAMG or ML?
>
>
>
> They are different algorithms. It's not possible to say in general that one
> is better. Try them both.
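[To make "try them both" concrete, here is a minimal sketch using the same toy 1-D
Laplacian as the sketch earlier in this message, repeated only so it compiles on its
own. The binary name is made up, error checking is omitted, and it assumes a PETSc
build configured with --download-ml and/or --download-hypre. The preconditioner is
left to the command line so ML and BoomerAMG can be compared without recompiling.]

/*
 * Minimal sketch for comparing the two AMG packages at run time:
 *   mpiexec -n 8 ./amg_sketch -pc_type ml -ksp_view
 *   mpiexec -n 8 ./amg_sketch -pc_type hypre -pc_hypre_type boomeramg -ksp_view
 */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PetscInt i, Istart, Iend, n = 1000;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Toy 1-D Laplacian, as in the earlier sketch */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
    if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  /* PC chosen on the command line; the in-code equivalents are
     PCSetType(pc, PCML) and PCSetType(pc, PCHYPRE) + PCHYPRESetType(pc, "boomeramg") */
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);

  VecDestroy(&x);
  VecDestroy(&b);
  MatDestroy(&A);
  KSPDestroy(&ksp);
  PetscFinalize();
  return 0;
}

[Comparing -ksp_monitor output for the two runs on the actual problem is the usual
way to decide which package wins for a given application.]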
>
>
> Matt
>
>
> Best regards,
> Karthik.
>
> type: mg
> MG: type is MULTIPLICATIVE, levels=1 cycles=v
> Cycles per PCApply=1
> Not using Galerkin computed coarse grid matrices
> Coarse grid solver -- level -------------------------------
> KSP Object: (mg_levels_0_) 8 MPI processes
> type: gmres
> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> GMRES: happy breakdown tolerance 1e-30
> maximum iterations=1, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using PRECONDITIONED norm type for convergence test
> PC Object: (mg_levels_0_) 8 MPI processes
> type: bjacobi
> block Jacobi: number of blocks = 8
> Local solve is same for all blocks, in the following KSP and PC objects:
> KSP Object: (mg_levels_0_sub_) 1 MPI processes
> type: preonly
> maximum iterations=10000, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (mg_levels_0_sub_) 1 MPI processes
> type: ilu
> ILU: out-of-place factorization
> 0 levels of fill
> tolerance for zero pivot 1e-12
> using diagonal shift to prevent zero pivot
> matrix ordering: natural
> factor fill ratio given 1, needed 1
> Factored matrix follows:
> Matrix Object: 1 MPI processes
> type: seqaij
> rows=9015, cols=9015
> package used to perform factorization: petsc
> total: nonzeros=517777, allocated nonzeros=517777
> total number of mallocs used during MatSetValues calls =0
> using I-node routines: found 3476 nodes, limit used is 5
> linear system matrix = precond matrix:
> Matrix Object: 1 MPI processes
> type: seqaij
> rows=9015, cols=9015
> total: nonzeros=517777, allocated nonzeros=517777
> total number of mallocs used during MatSetValues calls =0
> using I-node routines: found 3476 nodes, limit used is 5
> linear system matrix = precond matrix:
> Matrix Object: 8 MPI processes
> type: mpiaij
> rows=75000, cols=75000
> total: nonzeros=4427800, allocated nonzeros=4427800
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 3476 nodes, limit used is 5
> linear system matrix = precond matrix:
> Matrix Object: 8 MPI processes
> type: mpiaij
> rows=75000, cols=75000
> total: nonzeros=4427800, allocated nonzeros=4427800
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 3476 nodes, limit used is 5
>
>
>
> ----- Original Message -----
> From: "Barry Smith" < bsmith at mcs.anl.gov >
> To: "PETSc users list" < petsc-users at mcs.anl.gov >
> Sent: Tuesday, May 1, 2012 1:39:26 PM
> Subject: Re: [petsc-users] Multigrid
>
>
> On May 1, 2012, at 3:37 PM, Karthik Duraisamy wrote:
>
> > Hello,
> >
> > I have been using PETSc for a couple of years with good success, but
> > lately, as my linear problems have become stiffer (condition numbers on the
> > order of 1e20), I am looking to use better preconditioners. I tried using
> > PCMG with all the default options (i.e., I just specified my preconditioner
> > as PCMG and did not add any options to it), and I am immediately seeing
> > better convergence.
> >
> > What I am not sure of is why. I would like to know more about the
> > default parameters (the manual is not very explicit) and, more importantly,
> > I want to know why it is working even when I haven't specified any grid
> > levels or coarse grid operators. Any help in this regard will be appreciated.
>
> First run with -ksp_view to see what solver it is actually using.
>
> Barry
>
> >
> > Also, ultimately I want to use algebraic multigrid, so is PCML a better
> > option than BoomerAMG? I tried BoomerAMG with mixed results.
> >
> > Thanks,
> > Karthik
> >
> >
> >
> > --
> >
> > =======================================
> > Karthik Duraisamy
> > Assistant Professor (Consulting)
> > Durand Building Rm 357
> > Dept of Aeronautics and Astronautics
> > Stanford University
> > Stanford CA 94305
> >
> > Phone: 650-721-2835
> > Web: www.stanford.edu/~dkarthik
> > =======================================
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener