[petsc-users] Multigrid as a preconditioner

coco at dmi.unict.it
Fri Feb 17 18:49:56 CST 2012


> Date: Fri, 17 Feb 2012 13:29:16 -0600
> From: Matthew Knepley <knepley at gmail.com>
> Subject: Re: [petsc-users] petsc-users Digest, Vol 38, Issue 41
> To: PETSc users list <petsc-users at mcs.anl.gov>
>
> On Fri, Feb 17, 2012 at 1:09 PM, <coco at dmi.unict.it> wrote:
>
>>
>> Thank you for the answer.
>>
>>  Date: Fri, 17 Feb 2012 10:11:51 -0600
>>> From: Matthew Knepley <knepley at gmail.com>
>>> Subject: Re: [petsc-users] Multigrid as a preconditioner
>>> To: PETSc users list <petsc-users at mcs.anl.gov>
>>>
>>> On Fri, Feb 17, 2012 at 6:38 AM, <coco at dmi.unict.it> wrote:
>>>
>>>> Thank you very much for the answer, but some other doubts remain.
>>>>
>>>> Date: Thu, 16 Feb 2012 13:39:11 -0600
>>>>
>>>>> From: Matthew Knepley <knepley at gmail.com>
>>>>> Subject: Re: [petsc-users] Multigrid as a preconditioner
>>>>> To: PETSc users list <petsc-users at mcs.anl.gov>
>>>>>
>>>>> On Thu, Feb 16, 2012 at 12:54 PM, <coco at dmi.unict.it> wrote:
>>>>>
>>>>> Dear list,
>>>>>
>>>>>>
>>>>>> I would like to parallelize a multigrid code with PETSc. I do not want
>>>>>> to use the DMMG infrastructure, since it will be replaced in the next
>>>>>> PETSc release, so I prefer to use multigrid as a preconditioner. In
>>>>>> practice, I use the Richardson iteration with the matrix of the linear
>>>>>> system itself as the preconditioner, so the Richardson iteration should
>>>>>> converge in a single iteration, and it is effectively like solving the
>>>>>> whole linear system with the multigrid.
>>>>>>
>>>>>>
>>>>>  Your understanding of the Richardson iteration is flawed. You can
>>>>> consult Yousef Saad's book for the standard definition and analysis.
>>>>>
>>>>>
>>>>  I think my explanation was not so clear. What I would like to do is to
>>>> use a preconditioned Richardson iteration:
>>>>
>>>> x^(n+1) = x^n + P^(-1) (f - A x^n)
>>>>
>>>> Choosing P = A, I would expect to obtain the exact solution at the first
>>>> iteration; the whole linear system is then solved by the preconditioner
>>>> method that I chose. Is that what PETSc would do?
>>>>
>>>>
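(To spell out the claim: with P = A, and assuming P^(-1) is applied exactly,
the first step gives x^1 = x^0 + A^(-1) (f - A x^0) = A^(-1) f, i.e. the
exact solution after a single iteration.)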
>>> I am not sure what you mean by "Is that what PETSc would do?". PETSc does
>>> what you tell it to do. If you want it to solve in one iteration, tell it
>>> to use LU: -ksp_type richardson -pc_type lu.
>>>
>>>
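For the record, the one-iteration check suggested above would look roughly
like this in code (just a sketch, assuming Mcc, RHS and U are already
assembled as in the code further below; the names kspCheck/pcCheck are mine,
and PCLU on an MPIAIJ matrix in parallel may need an external direct solver
package):

KSP kspCheck;
PC  pcCheck;
ierr = KSPCreate(PETSC_COMM_WORLD,&kspCheck);CHKERRQ(ierr);
ierr = KSPSetOperators(kspCheck,Mcc,Mcc,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
ierr = KSPSetType(kspCheck,KSPRICHARDSON);CHKERRQ(ierr);
ierr = KSPGetPC(kspCheck,&pcCheck);CHKERRQ(ierr);
ierr = PCSetType(pcCheck,PCLU);CHKERRQ(ierr);    // exact P = A, same as -ksp_type richardson -pc_type lu
ierr = KSPSetFromOptions(kspCheck);CHKERRQ(ierr);
ierr = KSPSolve(kspCheck,RHS,U);CHKERRQ(ierr);   // should converge in a single Richardson iteration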
>> Indeed I would like to solve the whole linear system by a multigrid
>> approach and not by an LU factorization, so I would like to use
>> -ksp_type richardson -pc_type mg.
>> In this case, is the preconditioned problem P^(-1) (f - A x^n) solved
>> exactly, or is just one V-cycle performed? In either case, since I am
>> using a one-grid multigrid (just for debugging), it should provide the
>> exact solution at the first iteration, but it does not.
>>
>
> Great; however, I have no idea what you are actually using if you do not
> send what I asked for in the last message.
>
>    Matt
>
>

This is the output with the options -ksp_monitor -ksp_view:

0 KSP Residual norm 1.393931159942e+00
[...]
10000 KSP Residual norm 1.000095856678e+00
KSP Object:
   type: richardson
     Richardson: damping factor=1
   maximum iterations=10000, initial guess is zero
   tolerances:  relative=3.75657e-16, absolute=1e-50, divergence=10000
   left preconditioning
   using PRECONDITIONED norm type for convergence test
PC Object:
   type: mg
     MG: type is MULTIPLICATIVE, levels=1 cycles=v
       Cycles per PCApply=1
   Coarse grid solver -- level 0 presmooths=1 postsmooths=1 -----
     KSP Object:(mg_levels_0_)
       type: gmres
         GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
         GMRES: happy breakdown tolerance 1e-30
       maximum iterations=1, initial guess is zero
       tolerances:  relative=3.75657e-16, absolute=1e-50, divergence=10000
       left preconditioning
       using PRECONDITIONED norm type for convergence test
     PC Object:(mg_levels_0_)
       type: none
       linear system matrix = precond matrix:
       Matrix Object:
         type=shell, rows=2662, cols=2662
   linear system matrix = precond matrix:
   Matrix Object:
     type=mpiaij, rows=2662, cols=2662
     total: nonzeros=12160, allocated nonzeros=31324
       not using I-node (on process 0) routines
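Looking at the view above, the level-0 (coarse) solve is a single
unpreconditioned GMRES iteration (maximum iterations=1, pc type none) on a
shell matrix, so one application of the MG preconditioner is far from the
exact solve of P^(-1) r that I was expecting. Just as a sketch of the calls
involved (the solver type and the iteration limit here are only examples,
not necessarily the right fix):

ierr = PCMGGetCoarseSolve(pc,&kspCoarseSolve);CHKERRQ(ierr);
ierr = KSPSetType(kspCoarseSolve,KSPGMRES);CHKERRQ(ierr);   // choose the coarse solver explicitly
ierr = KSPSetTolerances(kspCoarseSolve,1.e-12,PETSC_DEFAULT,PETSC_DEFAULT,1000);CHKERRQ(ierr); // allow more than 1 iteration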


Thank you.
Armando

>>
>>>
>>>>>> As a first test, I tried to use a one-grid multigrid (so not a true
>>>>>> multigrid). I just set the coarse solver (with a standard KSPSolve),
>>>>>> which should be enough, because the multigrid then starts already from
>>>>>> the coarsest grid and does not need the smoother or the transfer
>>>>>> operators. Unfortunately, the iteration scheme (which should be the
>>>>>> Richardson scheme) converges with reason=4 (KSP_CONVERGED_ITS) to a
>>>>>> wrong solution. On the other hand, if I solve the whole problem with
>>>>>> the standard KSPSolve (without setting the multigrid as a
>>>>>> preconditioner), it converges to the right solution with reason=2.
>>>>>>
>>>>>>
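For reference, the converged reason can be queried in code like this (a
sketch; reason=4 is KSP_CONVERGED_ITS, meaning the KSP stopped because it
completed the prescribed number of iterations, not because the tolerance was
met):

KSPConvergedReason reason;
ierr = KSPSolve(ksp,RHS,U);CHKERRQ(ierr);
ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr);
if (reason == KSP_CONVERGED_ITS) {
  ierr = PetscPrintf(PETSC_COMM_WORLD,"stopped on the iteration count, not on the tolerance\n");CHKERRQ(ierr);
}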
>>>>>>  Yes, give Richardson many more iterations, -ksp_max_it.
>>>>>
>>>>>  Matt
>>>>>
>>>>>
>>>>  I tried, but unfortunately nothing changed.
>>>> Another strange phenomenon is that, even with the standard KSP solver
>>>> (which provides the right solution), if I use the flag -ksp_monitor
>>>> nothing is displayed. Is that an indication of some trouble?
>>>>
>>>>
>>> 1) You must call KSPSetFromOptions() to use command line options.
>>>
>>> 2) Run with -ksp_monitor -ksp_view and send the results, or there is no
>>> way we can know what is happening.
>>>
>>>  Matt
>>>
>>>
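In my code, point 1) corresponds to calling KSPSetFromOptions() after the
operators and tolerances are set and before KSPSolve(), e.g. (a sketch):

ierr = KSPSetOperators(ksp,Mcc,Mcc,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
ierr = KSPSetTolerances(ksp,1.e-12,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   // picks up -ksp_type, -pc_type, -ksp_monitor, -ksp_view, ...
ierr = KSPSolve(ksp,RHS,U);CHKERRQ(ierr);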
>> Thank you. I plotted the residual and it decreases very slowly. It seems
>> as if it is using the unpreconditioned Richardson iteration, i.e.
>> x^(n+1) = x^n + P^(-1) (f - A x^n) with P = I instead of P = A.
>>
>> Thank you.
>> Armando
>>
>>
>>>> Thank you in advance.
>>>> Armando
>>>>
>>>>
>>>>>> I thought that the two methods should be exactly the same, and I do
>>>>>> not understand why they give different convergence results.
>>>>>>
>>>>>> Here is the relevant code:
>>>>>>
>>>>>> // Set the matrix of the linear system
>>>>>> Mat Mcc;
>>>>>> ierr = MatCreate(PETSC_COMM_WORLD,&Mcc);CHKERRQ(ierr);
>>>>>> ierr = MatSetType(Mcc,MATMPIAIJ);CHKERRQ(ierr);
>>>>>> ierr = MatSetSizes(Mcc,PETSC_DECIDE,PETSC_DECIDE,1000,1000);CHKERRQ(ierr);
>>>>>> ierr = setMatrix(Mcc); // a routine that sets the values of the matrix Mcc
>>>>>>
>>>>>> // Set the KSP solver with the multigrid as a preconditioner
>>>>>> KSP ksp, kspCoarseSolve;
>>>>>> PC pc;
>>>>>> ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
>>>>>> ierr = KSPSetType(ksp,KSPRICHARDSON);CHKERRQ(ierr);
>>>>>> ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
>>>>>> ierr = PCSetType(pc,PCMG);CHKERRQ(ierr);
>>>>>> ierr = PCMGSetLevels(pc,1,&PETSC_COMM_WORLD);CHKERRQ(ierr);
>>>>>> ierr = PCMGSetType(pc,PC_MG_MULTIPLICATIVE);CHKERRQ(ierr);
>>>>>> ierr = PCMGGetCoarseSolve(pc,&kspCoarseSolve);CHKERRQ(ierr);
>>>>>> ierr = KSPSetOperators(kspCoarseSolve,Mcc,Mcc,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
>>>>>> ierr = KSPSetTolerances(kspCoarseSolve,1.e-12,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);
>>>>>> ierr = KSPSetOperators(ksp,Mcc,Mcc,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
>>>>>> ierr = KSPSetTolerances(ksp,1.e-12,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);
>>>>>> ierr = KSPSolve(ksp,RHS,U);CHKERRQ(ierr);
>>>>>>
>>>>>> // Solve with the standard KSPSolve
>>>>>> KSP ksp1;
>>>>>> ierr = KSPCreate(PETSC_COMM_WORLD,&ksp1);CHKERRQ(ierr);
>>>>>> ierr = KSPSetOperators(ksp1,Mcc,Mcc,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
>>>>>> ierr = KSPSetTolerances(ksp1,1.e-12/(2*nn123),PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);
>>>>>> ierr = KSPSolve(ksp1,RHS,U1);CHKERRQ(ierr);
>>>>>>
>>>>>>
>>>>>> At the end, the vectors U and U1 are different.
>>>>>> Thank you.
>>>>>>
>>>>>> Best regards,
>>>>>> Armando
>>>>>>
>>>>>>
>>>>>>
>>>>>>


