On Fri, Feb 17, 2012 at 6:38 AM, <span dir="ltr"><<a href="mailto:coco@dmi.unict.it">coco@dmi.unict.it</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Thank you very much for the answer, but some other doubts remain.<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Date: Thu, 16 Feb 2012 13:39:11 -0600<br>
From: Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>><br>
Subject: Re: [petsc-users] Multigrid as a preconditioner<br>
To: PETSc users list <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>
<br>
On Thu, Feb 16, 2012 at 12:54 PM, <<a href="mailto:coco@dmi.unict.it" target="_blank">coco@dmi.unict.it</a>> wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Dear list,<br>
<br>
I would like to parallelize a multigrid code with PETSc. I do not want to<br>
use the DMMG infrastructure, since it will be replaced in the next PETSc<br>
release, so I preferred to use the multigrid as a preconditioner. In<br>
practice, I use the Richardson iteration, choosing the matrix of the<br>
linear system itself as the preconditioner, so that the Richardson iteration<br>
should converge in only one iteration; effectively, this amounts to solving<br>
the whole linear system with the multigrid.<br>
<br>
</blockquote>
<br>
Your understanding of the Richardson iteration is flawed. You can consult<br>
Yousef Saad's book for the standard definition and analysis.<br>
<br>
</blockquote>
<br>
I think my explanation was not clear enough. What I would like to do is use a preconditioned Richardson iteration:<br>
<br>
x^(n+1) = x^n + P^(-1) (f-A x^n)<br>
<br>
Choosing P=A, I would expect to obtain the exact solution at the first iteration; then the whole linear system is solved by the preconditioning method that I chose. Is this what PETSc would do?<br></blockquote><div><br></div>
<div>I am not sure what you mean by "Is it what Petsc would do?". PETSc does what you tell it to do. If you want it</div><div>to solve in one iteration, tell it to use LU, -ksp_type richardson -pc_type lu.</div>
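<div><br></div><div>For illustration, an untested sketch of that setup, reusing the Mcc, RHS, and U objects from your code below and the same KSPSetOperators() signature (in parallel, a direct LU solve of an MPIAIJ matrix needs an external package, or use -pc_type redundant):</div><div><br></div><div>
KSP ksp;<br>
PC  pc;<br>
ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);<br>
ierr = KSPSetOperators(ksp,Mcc,Mcc,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);<br>
ierr = KSPSetType(ksp,KSPRICHARDSON);CHKERRQ(ierr);<br>
ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);<br>
ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);      /* P = A solved exactly, so one Richardson iteration suffices */<br>
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* lets -ksp_type/-pc_type on the command line override these defaults */<br>
ierr = KSPSolve(ksp,RHS,U);CHKERRQ(ierr);<br>
</div>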
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
As a first test, I tried to use a one-grid multigrid (so not truly a<br>
multigrid). I just set the coarse solver (by a standard KSPSolve), which<br>
should be enough, because the multigrid already starts from the coarsest<br>
grid and therefore does not need the smoother or the transfer operators.<br>
Unfortunately, the iteration scheme (which should be the Richardson<br>
scheme) converges with reason=4 (KSP_CONVERGED_ITS) to a wrong solution.<br>
On the other hand, if I solve the whole problem with the standard KSPSolve<br>
(i.e. without setting the multigrid as a preconditioner), it converges<br>
to the right solution with reason=2.<br>
<br>
</blockquote>
<br>
Yes, give Richardson many more iterations, -ksp_max_it.<br>
<br>
Matt<br>
<br>
</blockquote>
<br>
I tried, but unfortunately nothing changed.<br>
Another strange phenomenon is that, even with the standard KSP solver (which provides the right solution), nothing is displayed if I use the flag -ksp_monitor. Is that an indication of some trouble?<br></blockquote><div><br>
</div><div>1) You must call KSPSetFromOptions() to use command line options</div><div><br></div><div>2) Run with -ksp_monitor -ksp_view and send the results, or there is no way we can know what is happening.</div><div><br>
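</div><div>For example, the usual order is something like this (sketch, using the object names from your code):</div><div><br></div><div>
ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);<br>
ierr = KSPSetOperators(ksp,Mcc,Mcc,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);<br>
/* any hard-coded settings go here */<br>
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* picks up -ksp_type, -pc_type, -ksp_monitor, -ksp_view, ... */<br>
ierr = KSPSolve(ksp,RHS,U);CHKERRQ(ierr);<br>
</div><div><br>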
</div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Thank you in advance.<br>
Armando<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I thought that the two approaches should be exactly the same method, and<br>
I do not understand why they give different convergence results.<br>
<br>
Here is the relevant code:<br>
<br>
// Set the matrix of the linear system<br>
Mat Mcc;<br>
ierr = MatCreate(PETSC_COMM_WORLD,&Mcc);CHKERRQ(ierr);<br>
ierr = MatSetType(Mcc,MATMPIAIJ);CHKERRQ(ierr);<br>
ierr = MatSetSizes(Mcc,PETSC_DECIDE,PETSC_DECIDE,1000,1000);CHKERRQ(ierr);<br>
ierr = setMatrix(Mcc); // routine that fills in the values of the matrix Mcc<br>
<br>
// Set up the KSP solver with the multigrid as a preconditioner<br>
KSP ksp, kspCoarseSolve;<br>
PC  pc;<br>
ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);<br>
ierr = KSPSetType(ksp,KSPRICHARDSON);CHKERRQ(ierr);<br>
ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);<br>
ierr = PCSetType(pc,PCMG);CHKERRQ(ierr);<br>
ierr = PCMGSetLevels(pc,1,&PETSC_COMM_WORLD);CHKERRQ(ierr);<br>
ierr = PCMGSetType(pc,PC_MG_MULTIPLICATIVE);CHKERRQ(ierr);<br>
ierr = PCMGGetCoarseSolve(pc,&kspCoarseSolve);CHKERRQ(ierr);<br>
ierr = KSPSetOperators(kspCoarseSolve,Mcc,Mcc,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);<br>
ierr = KSPSetTolerances(kspCoarseSolve,1.e-12,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);<br>
ierr = KSPSetOperators(ksp,Mcc,Mcc,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);<br>
ierr = KSPSetTolerances(ksp,1.e-12,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);<br>
ierr = KSPSolve(ksp,RHS,U);CHKERRQ(ierr);<br>
<br>
// Solve with the standard KSPSolve<br>
KSP ksp1;<br>
ierr = KSPCreate(PETSC_COMM_WORLD,&ksp1);CHKERRQ(ierr);<br>
ierr = KSPSetOperators(ksp1,Mcc,Mcc,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);<br>
ierr = KSPSetTolerances(ksp1,1.e-12/(2*nn123),PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);<br>
ierr = KSPSolve(ksp1,RHS,U1);CHKERRQ(ierr);<br>
<br>
<br>
At the end, the vectors U and U1 are different.<br>
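<br>
The difference could be quantified with something like the following untested sketch (note that it overwrites U1):<br>
<br>
KSPConvergedReason reason;<br>
PetscReal          nrm;<br>
ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr);<br>
ierr = PetscPrintf(PETSC_COMM_WORLD,"Richardson/MG converged reason: %d\n",(int)reason);CHKERRQ(ierr);<br>
ierr = VecAXPY(U1,-1.0,U);CHKERRQ(ierr);   /* U1 = U1 - U */<br>
ierr = VecNorm(U1,NORM_2,&nrm);CHKERRQ(ierr);<br>
ierr = PetscPrintf(PETSC_COMM_WORLD,"||U1 - U|| = %g\n",(double)nrm);CHKERRQ(ierr);<br>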
Thank you.<br>
<br>
Best regards,<br>
Armando<br>
<br>
<br>
</blockquote></blockquote>
<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>