[petsc-users] Multigrid

Karthik Duraisamy dkarthik at stanford.edu
Thu May 3 00:10:36 CDT 2012


Oops, I am sorry; I attached the wrong output (I need some sleep). This is what I got from your suggestion, and it worked well.

--------------------------------------
KSP Object: 8 MPI processes
  type: fgmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1e+10
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
  type: ml
    MG: type is MULTIPLICATIVE, levels=5 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     8 MPI processes    
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     8 MPI processes    
      type: redundant
        Redundant preconditioner: First (color=0) of 8 PCs follows
      KSP Object:      (mg_coarse_redundant_)       1 MPI processes      
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (mg_coarse_redundant_)       1 MPI processes      
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 1e-12
          matrix ordering: nd
          factor fill ratio given 5, needed 1
            Factored matrix follows:
              Matrix Object:               1 MPI processes              
                type: seqaij
                rows=8, cols=8
                package used to perform factorization: petsc
                total: nonzeros=64, allocated nonzeros=64
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 2 nodes, limit used is 5
        linear system matrix = precond matrix:
        Matrix Object:         1 MPI processes        
          type: seqaij
          rows=8, cols=8
          total: nonzeros=64, allocated nonzeros=64
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 2 nodes, limit used is 5
      linear system matrix = precond matrix:
      Matrix Object:       8 MPI processes      
        type: mpiaij
        rows=8, cols=8
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     8 MPI processes    
      type: richardson
        Richardson: damping factor=1
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using PRECONDITIONED norm type for convergence test
    PC Object:    (mg_levels_1_)     8 MPI processes    
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       8 MPI processes      
        type: mpiaij
        rows=9, cols=9
        total: nonzeros=77, allocated nonzeros=77
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (mg_levels_2_)     8 MPI processes    
      type: richardson
        Richardson: damping factor=1
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using PRECONDITIONED norm type for convergence test
    PC Object:    (mg_levels_2_)     8 MPI processes    
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       8 MPI processes      
        type: mpiaij
        rows=98, cols=98
        total: nonzeros=1589, allocated nonzeros=1589
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object:    (mg_levels_3_)     8 MPI processes    
      type: richardson
        Richardson: damping factor=1
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using PRECONDITIONED norm type for convergence test
    PC Object:    (mg_levels_3_)     8 MPI processes    
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       8 MPI processes      
        type: mpiaij
        rows=1362, cols=1362
        total: nonzeros=21940, allocated nonzeros=21940
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 4 -------------------------------
    KSP Object:    (mg_levels_4_)     8 MPI processes    
      type: richardson
        Richardson: damping factor=1
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using PRECONDITIONED norm type for convergence test
    PC Object:    (mg_levels_4_)     8 MPI processes    
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       8 MPI processes      
        type: mpiaij
        rows=75000, cols=75000
        total: nonzeros=4427800, allocated nonzeros=4427800
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 3476 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   8 MPI processes  
    type: mpiaij
    rows=75000, cols=75000
    total: nonzeros=4427800, allocated nonzeros=4427800
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 3476 nodes, limit used is 5
ITER, DTMIN, CFL = 9 0.459314 10


----- Original Message -----
From: "Barry Smith" <bsmith at mcs.anl.gov>
To: "PETSc users list" <petsc-users at mcs.anl.gov>
Sent: Wednesday, May 2, 2012 10:04:07 PM
Subject: Re: [petsc-users] Multigrid


On May 3, 2012, at 12:01 AM, Karthik Duraisamy wrote:

> Sorry, I forgot to add that I tried just those options
> 
> -pc_type ml -ksp_type fgmres   -ksp_max_it 100 -ksp_view 
> 
> to get the result that I got.

    The output you sent below was NOT obtained with these options. 

>> type: gmres
>> type: bjacobi

   Are you sure you have KSPSetFromOptions() in your code?
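
   The usual pattern is to set the operators and any hard-coded defaults first, and to
   call KSPSetFromOptions() last, right before the solve, so that the command-line
   options actually take effect. Roughly (a sketch only, using your A_, with b and x
   standing for the right-hand side and solution vectors):

      KSP ksp;
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A_, A_, SAME_NONZERO_PATTERN);  /* operators first */
      /* any programmatic defaults, e.g. KSPSetType(ksp, KSPFGMRES); */
      KSPSetFromOptions(ksp);   /* called last, so the command line overrides the above */
      KSPSolve(ksp, b, x);      /* b, x assumed to be your rhs and solution Vecs */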

  Barry


> I was asking for additional options. In any case, now I can try a few things.
> 
> ----- Original Message -----
> From: "Barry Smith" <bsmith at mcs.anl.gov>
> To: "PETSc users list" <petsc-users at mcs.anl.gov>
> Sent: Wednesday, May 2, 2012 9:56:05 PM
> Subject: Re: [petsc-users] Multigrid
> 
> 
>   Make the run I suggested -pc_type ml -ksp_type fgmres   -ksp_max_it 100 -ksp_view   
> 
>  Barry
> 
> On May 2, 2012, at 10:21 PM, Karthik Duraisamy wrote:
> 
>> Fair enough. I wiped out all references to PETSc in my code, started from scratch, and used just the options you mentioned, and I get good convergence as first reported in the PCMG reference! I have attached the output. Now that it is behaving, could you recommend some options to exercise the multigrid?
>> 
>> Thanks a lot for your time (and patience!)
>> 
>> ----------------------
>> 
>> KSP Object: 8 MPI processes
>> type: gmres
>>   GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>   GMRES: happy breakdown tolerance 1e-30
>> maximum iterations=1, initial guess is zero
>> tolerances:  relative=1e-05, absolute=1e-50, divergence=1e+10
>> left preconditioning
>> using PRECONDITIONED norm type for convergence test
>> PC Object: 8 MPI processes
>> type: bjacobi
>>   block Jacobi: number of blocks = 8
>>   Local solve is same for all blocks, in the following KSP and PC objects:
>> KSP Object:  (sub_)   1 MPI processes  
>>   type: preonly
>>   maximum iterations=10000, initial guess is zero
>>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>>   left preconditioning
>>   using NONE norm type for convergence test
>> PC Object:  (sub_)   1 MPI processes  
>>   type: ilu
>>     ILU: out-of-place factorization
>>     0 levels of fill
>>     tolerance for zero pivot 1e-12
>>     using diagonal shift to prevent zero pivot
>>     matrix ordering: natural
>>     factor fill ratio given 1, needed 1
>>       Factored matrix follows:
>>         Matrix Object:           1 MPI processes          
>>           type: seqaij
>>           rows=9015, cols=9015
>>           package used to perform factorization: petsc
>>           total: nonzeros=517777, allocated nonzeros=517777
>>           total number of mallocs used during MatSetValues calls =0
>>             using I-node routines: found 3476 nodes, limit used is 5
>>   linear system matrix = precond matrix:
>>   Matrix Object:     1 MPI processes    
>>     type: seqaij
>>     rows=9015, cols=9015
>>     total: nonzeros=517777, allocated nonzeros=517777
>>     total number of mallocs used during MatSetValues calls =0
>>       using I-node routines: found 3476 nodes, limit used is 5
>> linear system matrix = precond matrix:
>> Matrix Object:   8 MPI processes  
>>   type: mpiaij
>>   rows=75000, cols=75000
>>   total: nonzeros=4427800, allocated nonzeros=4427800
>>   total number of mallocs used during MatSetValues calls =0
>>     using I-node (on process 0) routines: found 3476 nodes, limit used is 5
>> 
>> 
>> 
>> 
>> ----- Original Message -----
>> From: "Barry Smith" <bsmith at mcs.anl.gov>
>> To: "PETSc users list" <petsc-users at mcs.anl.gov>
>> Sent: Wednesday, May 2, 2012 10:34:17 AM
>> Subject: Re: [petsc-users] Multigrid
>> 
>> 
>>  Run with -pc_type ml -ksp_type fgmres -ksp_max_it 100 -ksp_view and NOTHING else. Don't set any KSP or PC options in the code!  Then send us ALL the output.
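>> 
>> For example (use whatever MPI launcher and executable you normally run; the name below is only a placeholder):
>> 
>>     mpiexec -n 8 ./your_solver -pc_type ml -ksp_type fgmres -ksp_max_it 100 -ksp_view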
>> 
>>   Barry
>> 
>> I am getting all confused with "I ran with these options and this happened, I ran with these options and this happened, I ran with these options and this happened". I am getting information overload with too much information and yet not enough information.
>> 
>> 
>> On May 2, 2012, at 12:27 PM, Karthik Duraisamy wrote:
>> 
>>> Hello,
>>> 
>>>  With PCML, I tried 
>>> 
>>>  -mg_coarse_pc_type redundant; -mg_coarse_ksp_type preonly; KSPSetType(ksp, KSPGMRES);
>>> 
>>> and the residual still diverges after a while. Strangely, if I use PCMG (with 0 levels), it works OK, but not with PCML.
>>> 
>>> This is definitely strange. Here is the entire piece of relevant code.
>>> 
>>> When initializing PETSc, I do the following (let's call this the first bit; I do it only once):
>>> 
>>>  KSPSetOperators(ksp, A_, A_, SAME_NONZERO_PATTERN);
>>>  KSPSetInitialGuessKnoll(ksp, PETSC_TRUE);
>>>  KSPSetType(ksp, KSPFGMRES);
>>>  KSPGMRESSetRestart(ksp, 100);
>>>  KSPSetFromOptions(ksp);
>>> 
>>> 
>>> Then, for each outer solver iteration, I do the following (let's call this the second bit; it runs every outer iteration):
>>> 
>>>  PC pc;
>>>  KSPGetPC(ksp,&pc);
>>>  PCSetType(pc, PCMG);
>>>  KSPSetOperators(ksp, A_, A_, SAME_NONZERO_PATTERN);
>>>  PetscOptionsSetValue("-ksp_initial_guess_nonzero", "true");
>>>  KSPSetType(ksp, KSPGMRES);
>>>  KSPGMRESSetRestart(ksp, 100);
>>>  KSPSetFromOptions(ksp);
>>> 
>>> 
>>> I have tried various combinations of preconditioners and smoothers (including fgmres, bcgsl, etc.), but the only combination that works is the above. Every PCML configuration that I have tried has failed.
>>> 
>>> Notes:
>>> - If I keep both of the above bits, it converges very well
>>> 
>>> - If I keep both of the above bits, but replace PCMG with PCML, it diverges quickly
>>> 
>>> - If I remove the first bit and keep the second bit, the residuals diverge even with PCMG
>>> 
>>> - If I keep the first bit and remove the second bit, the residuals diverge eventually, but at a slow rate
>>> 
>>> 
>>> I am very confident in the rest of the code, as I have been using PETSc (with just the first bit) for 3 years on hundreds of problems.
>>> 
>>> Regards,
>>> Karthik
>>> 
>>> ----- Original Message -----
>>> From: "Barry Smith" <bsmith at mcs.anl.gov>
>>> To: "PETSc users list" <petsc-users at mcs.anl.gov>
>>> Sent: Wednesday, May 2, 2012 6:02:06 AM
>>> Subject: Re: [petsc-users] Multigrid
>>> 
>>> 
>>> This is likely helping a great deal:   
>>> 
>>> KSPGMRESSetRestart(ksp, 100);
>>> 
>>> You can try this with ml also
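>>> (From the command line that is -ksp_gmres_restart 100.)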
>>> 
>>> On May 2, 2012, at 1:33 AM, Karthik Duraisamy wrote:
>>> 
>>>> Hello again,
>>>> 
>>>>   I played with BoomerAMG and ML (for the 2D problem with a reasonable condition number), and so far they haven't worked (the residual eventually diverges). The options that I am using are, for instance:
>>>> 
>>>> PC pc; 
>>>> KSPGetPC(ksp,&pc);
>>>> PetscOptionsSetValue("-ksp_type", "gmres");
>>> 
>>> Here you NEED fgmres, since the smoother is gmres by default; a preconditioner that itself contains a Krylov iteration changes from one application to the next, so the outer method must be flexible. You can see this with -ksp_view.
>>> 
>>>> KSPSetOperators(ksp, A_, A_, SAME_NONZERO_PATTERN);
>>>> PetscOptionsSetValue("-ksp_initial_guess_nonzero", "true");
>>>> PCSetType(pc, PCML);
>>>> PetscOptionsSetValue("-pc_ml_maxNlevels", "3");
>>>> PetscOptionsSetValue("-mg_coarse_ksp_type","richardson");
>>>> PetscOptionsSetValue("-mg_coarse_pc_type","sor");
>>>> PetscOptionsSetValue("-mg_coarse_pc_sor_its","8");
>>> 
>>> Just use -mg_coarse_pc_type  redundant -mg_coarse_ksp_type preonly instead of the above.
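>>> 
>>> All on the command line, something along these lines should exercise ml (adjust the level count and so on to taste):
>>> 
>>>    -ksp_type fgmres -pc_type ml -pc_ml_maxNlevels 3 -mg_coarse_ksp_type preonly -mg_coarse_pc_type redundant -ksp_monitor -ksp_view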
>>> 
>>> 
>>> Barry
>>> 
>>>> PetscOptionsSetValue("-ksp_monitor","1");
>>>> KSPSetFromOptions(ksp);
>>> 
>>> 
>>>> 
>>>> I have replaced KSP with bcgsl, fgmres, etc.
>>>> 
>>>> As I mentioned earlier, the following works like a charm:
>>>> 
>>>> PC pc; 
>>>> KSPGetPC(ksp,&pc);
>>>> PCSetType(pc, PCMG);
>>>> KSPSetOperators(ksp, A_, A_, SAME_NONZERO_PATTERN);
>>>> PetscOptionsSetValue("-ksp_initial_guess_nonzero", "true");
>>>> KSPSetType(ksp, KSPGMRES);
>>>> KSPGMRESSetRestart(ksp, 100);
>>>> KSPSetFromOptions(ksp);
>>>> 
>>>> 
>>>> Regards,
>>>> Karthik.
>>>> 
>>>> 
>>>> 
>>>> ----- Original Message -----
>>>> From: "Barry Smith" <bsmith at mcs.anl.gov>
>>>> To: "PETSc users list" <petsc-users at mcs.anl.gov>
>>>> Sent: Tuesday, May 1, 2012 9:14:05 PM
>>>> Subject: Re: [petsc-users] Multigrid
>>>> 
>>>> 
>>>> On May 1, 2012, at 6:00 PM, Karthik Duraisamy wrote:
>>>> 
>>>>> So, as I understand it, GMRES is used both as a preconditioner and as a solver when I use PCMG with defaults. If this is the case, I should be able to recreate this setup without PCMG. Any pointers as to how this can be done?
>>>>> 
>>>>> Also, yes indeed, my mesh is completely unstructured, so I will have to use ml or boomeramg. 
>>>>> 
>>>>> The problems that I am attempting involve RANS of compressible turbulent combustion (finite volume, steady). The high condition numbers are because of the extreme grid stretching and stiff source terms (in the turbulence and combustion model).
>>>> 
>>>> With a condition number like that I wouldn't even consider using double precision; I think you would just be computing meaningless noise. With petsc-dev you can use quad precision (./configure --with-precision=__float128) with recent GNU compilers
>>>> (no, we haven't tested the gfortran side of things).
>>>> 
>>>> Barry
>>>> 
>>>> 
>>>>> I have been trying a reduced problem in these 2D test cases, in which the condition number is only 1e7.
>>>>> 
>>>>> Thanks,
>>>>> Karthik.
>>>>> 
>>>>> 
>>>>> ----- Original Message -----
>>>>> From: "Mark F. Adams" <mark.adams at columbia.edu>
>>>>> To: "PETSc users list" <petsc-users at mcs.anl.gov>
>>>>> Sent: Tuesday, May 1, 2012 3:50:31 PM
>>>>> Subject: Re: [petsc-users] Multigrid
>>>>> 
>>>>> 
>>>>> Also, note that PCMG cannot create coarse grid spaces for an (MPI)AIJ matrix. If you use regular grids (DA?) then PETSc can construct geometric multigrid coarse grid spaces, although I don't know if PCMG will construct these for you (I don't think it will, and I can see from your output that PCMG just used one grid). 'ml', 'hypre', and 'gamg' (a native AMG solver) will give you real AMG solvers. All three can work on a similar class of problems. 
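>>>>> (To try each of them: -pc_type ml, -pc_type hypre -pc_hypre_type boomeramg, or -pc_type gamg, assuming your PETSc build was configured with the ml and hypre packages, e.g. --download-ml --download-hypre.) 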
>>>>> 
>>>>> 
>>>>> Also, you mention that you have a condition number of 1.e20. That is astronomical for such a small problem. How did you compute that number? Do you know where the ill-conditioning comes from? Is this an elliptic operator? 
>>>>> 
>>>>> 
>>>>> Mark 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> On May 1, 2012, at 6:31 PM, Matthew Knepley wrote: 
>>>>> 
>>>>> 
>>>>> 
>>>>> On Tue, May 1, 2012 at 6:27 PM, Karthik Duraisamy < dkarthik at stanford.edu > wrote: 
>>>>> 
>>>>> 
>>>>> 
>>>>> The following was the output for the very first iteration, whereas what I had attached earlier was the output at every iteration. I am still a bit perplexed, because PCMG drops the residual like a rock after the first few iterations, whereas with no PCMG it is very slow. 
>>>>> 
>>>>> 
>>>>> 
>>>>> Because the smoother IS the solver you were using before. Just like I said last time, what you are doing is 
>>>>> wrapping up the same solver you used before, sticking it in another GMRES loop, and only looking at the 
>>>>> outer loop. This has nothing to do with MG. 
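>>>>> (In options terms it is effectively -ksp_type gmres -pc_type mg with -mg_levels_0_ksp_type gmres -mg_levels_0_pc_type bjacobi: your old gmres/bjacobi/ilu solve is just sitting inside the outer GMRES as the level-0 solve.) 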
>>>>> 
>>>>> 
>>>>> Matt 
>>>>> 
>>>>> 
>>>>> KSP Object: 8 MPI processes 
>>>>> type: gmres 
>>>>> GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement 
>>>>> GMRES: happy breakdown tolerance 1e-30 
>>>>> maximum iterations=1, initial guess is zero 
>>>>> using preconditioner applied to right hand side for initial guess 
>>>>> tolerances: relative=0.01, absolute=1e-08, divergence=1e+10 
>>>>> left preconditioning 
>>>>> using DEFAULT norm type for convergence test 
>>>>> PC Object: 8 MPI processes 
>>>>> type: mg 
>>>>> MG: type is MULTIPLICATIVE, levels=1 cycles=v 
>>>>> Cycles per PCApply=1 
>>>>> Not using Galerkin computed coarse grid matrices 
>>>>> Coarse grid solver -- level ------------------------------- 
>>>>> KSP Object: (mg_levels_0_) 8 MPI processes 
>>>>> type not yet set 
>>>>> maximum iterations=1, initial guess is zero 
>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 
>>>>> left preconditioning 
>>>>> using DEFAULT norm type for convergence test 
>>>>> PC Object: (mg_levels_0_) 8 MPI processes 
>>>>> type not yet set 
>>>>> linear system matrix = precond matrix: 
>>>>> Matrix Object: 8 MPI processes 
>>>>> type: mpiaij 
>>>>> rows=75000, cols=75000 
>>>>> total: nonzeros=4427800, allocated nonzeros=4427800 
>>>>> total number of mallocs used during MatSetValues calls =0 
>>>>> using I-node (on process 0) routines: found 3476 nodes, limit used is 5 
>>>>> 
>>>>> 
>>>>> ----- Original Message ----- 
>>>>> From: "Matthew Knepley" < knepley at gmail.com > 
>>>>> To: "PETSc users list" < petsc-users at mcs.anl.gov > 
>>>>> Sent: Tuesday, May 1, 2012 3:22:56 PM 
>>>>> Subject: Re: [petsc-users] Multigrid 
>>>>> 
>>>>> 
>>>>> On Tue, May 1, 2012 at 6:18 PM, Karthik Duraisamy < dkarthik at stanford.edu > wrote: 
>>>>> 
>>>>> 
>>>>> 
>>>>> Hello, 
>>>>> 
>>>>> Sorry (and thanks for the reply). I've attached the no-multigrid case. I didn't include it before because, at least to the untrained eye, everything looks the same. 
>>>>> 
>>>>> 
>>>>> 
>>>>> Did you send all the output from the MG case? There must be a PC around it. By default it's GMRES, so there would be 
>>>>> an extra GMRES loop compared to the case without MG. 
>>>>> 
>>>>> 
>>>>> Matt 
>>>>> 
>>>>> 
>>>>> Regards, 
>>>>> Karthik 
>>>>> 
>>>>> KSP Object: 8 MPI processes 
>>>>> type: gmres 
>>>>> GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement 
>>>>> GMRES: happy breakdown tolerance 1e-30 
>>>>> maximum iterations=1 
>>>>> using preconditioner applied to right hand side for initial guess 
>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=1e+10 
>>>>> left preconditioning 
>>>>> using nonzero initial guess 
>>>>> using PRECONDITIONED norm type for convergence test 
>>>>> PC Object: 8 MPI processes 
>>>>> type: bjacobi 
>>>>> block Jacobi: number of blocks = 8 
>>>>> Local solve is same for all blocks, in the following KSP and PC objects: 
>>>>> KSP Object: (sub_) 1 MPI processes 
>>>>> type: preonly 
>>>>> maximum iterations=10000, initial guess is zero 
>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 
>>>>> left preconditioning 
>>>>> using NONE norm type for convergence test 
>>>>> PC Object: (sub_) 1 MPI processes 
>>>>> type: ilu 
>>>>> ILU: out-of-place factorization 
>>>>> 0 levels of fill 
>>>>> tolerance for zero pivot 1e-12 
>>>>> using diagonal shift to prevent zero pivot 
>>>>> matrix ordering: natural 
>>>>> factor fill ratio given 1, needed 1 
>>>>> Factored matrix follows: 
>>>>> Matrix Object: 1 MPI processes 
>>>>> type: seqaij 
>>>>> rows=9015, cols=9015 
>>>>> package used to perform factorization: petsc 
>>>>> total: nonzeros=517777, allocated nonzeros=517777 
>>>>> total number of mallocs used during MatSetValues calls =0 
>>>>> using I-node routines: found 3476 nodes, limit used is 5 
>>>>> linear system matrix = precond matrix: 
>>>>> Matrix Object: 1 MPI processes 
>>>>> type: seqaij 
>>>>> rows=9015, cols=9015 
>>>>> total: nonzeros=517777, allocated nonzeros=517777 
>>>>> total number of mallocs used during MatSetValues calls =0 
>>>>> using I-node routines: found 3476 nodes, limit used is 5 
>>>>> linear system matrix = precond matrix: 
>>>>> Matrix Object: 8 MPI processes 
>>>>> type: mpiaij 
>>>>> rows=75000, cols=75000 
>>>>> total: nonzeros=4427800, allocated nonzeros=4427800 
>>>>> total number of mallocs used during MatSetValues calls =0 
>>>>> using I-node (on process 0) routines: found 3476 nodes, limit used is 5 
>>>>> 
>>>>> 
>>>>> ----- Original Message ----- 
>>>>> From: "Matthew Knepley" < knepley at gmail.com > 
>>>>> To: "PETSc users list" < petsc-users at mcs.anl.gov > 
>>>>> Sent: Tuesday, May 1, 2012 3:15:14 PM 
>>>>> Subject: Re: [petsc-users] Multigrid 
>>>>> 
>>>>> 
>>>>> On Tue, May 1, 2012 at 6:12 PM, Karthik Duraisamy < dkarthik at stanford.edu > wrote: 
>>>>> 
>>>>> 
>>>>> 
>>>>> Hello Barry, 
>>>>> 
>>>>> Thank you for your super quick response. I have attached the output of ksp_view, and it is practically the same as when I don't use PCMG. The part I don't understand is how PCMG is able to function at the zero grid level and still produce much better convergence than the default PC. Is there any additional smoothing or interpolation going on? 
>>>>> 
>>>>> 
>>>>> 
>>>>> You only included one output, so I have no way of knowing what you used before. However, this is running GMRES/ILU. 
>>>>> 
>>>>> 
>>>>> Also, for algebraic multigrid, would you recommend BoomerAMG or ML? 
>>>>> 
>>>>> 
>>>>> 
>>>>> They are different algorithms. It's not possible to say generally that one is better. Try them both. 
>>>>> 
>>>>> 
>>>>> Matt 
>>>>> 
>>>>> 
>>>>> Best regards, 
>>>>> Karthik. 
>>>>> 
>>>>> type: mg 
>>>>> MG: type is MULTIPLICATIVE, levels=1 cycles=v 
>>>>> Cycles per PCApply=1 
>>>>> Not using Galerkin computed coarse grid matrices 
>>>>> Coarse grid solver -- level ------------------------------- 
>>>>> KSP Object: (mg_levels_0_) 8 MPI processes 
>>>>> type: gmres 
>>>>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement 
>>>>> GMRES: happy breakdown tolerance 1e-30 
>>>>> maximum iterations=1, initial guess is zero 
>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 
>>>>> left preconditioning 
>>>>> using PRECONDITIONED norm type for convergence test 
>>>>> PC Object: (mg_levels_0_) 8 MPI processes 
>>>>> type: bjacobi 
>>>>> block Jacobi: number of blocks = 8 
>>>>> Local solve is same for all blocks, in the following KSP and PC objects: 
>>>>> KSP Object: (mg_levels_0_sub_) 1 MPI processes 
>>>>> type: preonly 
>>>>> maximum iterations=10000, initial guess is zero 
>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 
>>>>> left preconditioning 
>>>>> using NONE norm type for convergence test 
>>>>> PC Object: (mg_levels_0_sub_) 1 MPI processes 
>>>>> type: ilu 
>>>>> ILU: out-of-place factorization 
>>>>> 0 levels of fill 
>>>>> tolerance for zero pivot 1e-12 
>>>>> using diagonal shift to prevent zero pivot 
>>>>> matrix ordering: natural 
>>>>> factor fill ratio given 1, needed 1 
>>>>> Factored matrix follows: 
>>>>> Matrix Object: 1 MPI processes 
>>>>> type: seqaij 
>>>>> rows=9015, cols=9015 
>>>>> package used to perform factorization: petsc 
>>>>> total: nonzeros=517777, allocated nonzeros=517777 
>>>>> total number of mallocs used during MatSetValues calls =0 
>>>>> using I-node routines: found 3476 nodes, limit used is 5 
>>>>> linear system matrix = precond matrix: 
>>>>> Matrix Object: 1 MPI processes 
>>>>> type: seqaij 
>>>>> rows=9015, cols=9015 
>>>>> total: nonzeros=517777, allocated nonzeros=517777 
>>>>> total number of mallocs used during MatSetValues calls =0 
>>>>> using I-node routines: found 3476 nodes, limit used is 5 
>>>>> linear system matrix = precond matrix: 
>>>>> Matrix Object: 8 MPI processes 
>>>>> type: mpiaij 
>>>>> rows=75000, cols=75000 
>>>>> total: nonzeros=4427800, allocated nonzeros=4427800 
>>>>> total number of mallocs used during MatSetValues calls =0 
>>>>> using I-node (on process 0) routines: found 3476 nodes, limit used is 5 
>>>>> linear system matrix = precond matrix: 
>>>>> Matrix Object: 8 MPI processes 
>>>>> type: mpiaij 
>>>>> rows=75000, cols=75000 
>>>>> total: nonzeros=4427800, allocated nonzeros=4427800 
>>>>> total number of mallocs used during MatSetValues calls =0 
>>>>> using I-node (on process 0) routines: found 3476 nodes, limit used is 5 
>>>>> 
>>>>> 
>>>>> 
>>>>> ----- Original Message ----- 
>>>>> From: "Barry Smith" < bsmith at mcs.anl.gov > 
>>>>> To: "PETSc users list" < petsc-users at mcs.anl.gov > 
>>>>> Sent: Tuesday, May 1, 2012 1:39:26 PM 
>>>>> Subject: Re: [petsc-users] Multigrid 
>>>>> 
>>>>> 
>>>>> On May 1, 2012, at 3:37 PM, Karthik Duraisamy wrote: 
>>>>> 
>>>>>> Hello, 
>>>>>> 
>>>>>> I have been using PETSc for a couple of years with good success, but lately, as my linear problems have become stiffer (condition numbers of the order of 1.e20), I have been looking for better preconditioners. I tried using PCMG with all the default options (i.e., I just specified my preconditioner as PCMG and did not add any options to it), and I am immediately seeing better convergence. 
>>>>>> 
>>>>>> What I am not sure of is why. I would like to know more about the default parameters (the manual is not very explicit) and, more importantly, why it works even when I haven't specified any grid levels or coarse grid operators. Any help in this regard will be appreciated. 
>>>>> 
>>>>> First run with -ksp_view to see what solver it is actually using. 
>>>>> 
>>>>> Barry 
>>>>> 
>>>>>> 
>>>>>> Also, ultimately I want to use algebraic multigrid, so is PCML a better option than BoomerAMG? I tried BoomerAMG with mixed results. 
>>>>>> 
>>>>>> Thanks, 
>>>>>> Karthik 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> -- 
>>>>>> 
>>>>>> ======================================= 
>>>>>> Karthik Duraisamy 
>>>>>> Assistant Professor (Consulting) 
>>>>>> Durand Building Rm 357 
>>>>>> Dept of Aeronautics and Astronautics 
>>>>>> Stanford University 
>>>>>> Stanford CA 94305 
>>>>>> 
>>>>>> Phone: 650-721-2835 
>>>>>> Web: www.stanford.edu/~dkarthik 
>>>>>> ======================================= 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
>>>>> -- Norbert Wiener 
>>>>> 
>>>> 
>>> 
>> 
> 


