[petsc-users] Richardson with multiplicative MG vs. full MG
John Fettig
john.fettig at gmail.com
Mon Aug 13 19:57:48 CDT 2012
I really like the idea of PCMGRegisterCycle().
Thanks,
John
On Mon, Aug 13, 2012 at 6:55 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> Should we add PCMGRegisterCycle() and/or provide a more flexible way to
> specify cycle index?
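PCMGRegisterCycle() does not exist yet; purely as a sketch of what such a hook might look like (every name below is hypothetical, not current PETSc API):

typedef PetscErrorCode (*PCMGCycleFn)(PC,PC_MG_Levels**,PCRichardsonConvergedReason*);

/* Hypothetical sketch: store a user-supplied cycle routine so that PCApply_MG
   could dispatch to it instead of the built-in V/W/full cycles. */
PetscErrorCode PCMGRegisterCycle(PC pc,PCMGCycleFn cycle)
{
  PC_MG *mg = (PC_MG*)pc->data;

  PetscFunctionBegin;
  mg->usercycle = cycle;   /* hypothetical field in PC_MG */
  PetscFunctionReturn(0);
}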
>
> On Mon, Aug 13, 2012 at 4:53 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>>
>> John,
>>
>> You are right, we do not have completely optimized forms of all
>> variants of the multigrid algorithms.
>>
>> You could make a modification to the routine
>>
>> #undef __FUNCT__
>> #define __FUNCT__ "PCMGMCycle_Private"
>> PetscErrorCode PCMGMCycle_Private(PC pc,PC_MG_Levels **mglevelsin,PCRichardsonConvergedReason *reason)
>> {
>>   PC_MG          *mg = (PC_MG*)pc->data;
>>   PC_MG_Levels   *mgc,*mglevels = *mglevelsin;
>>   PetscErrorCode ierr;
>>   PetscInt       cycles = (mglevels->level == 1) ? 1 : (PetscInt) mglevels->cycles;
>>
>>   PetscFunctionBegin;
>>
>>   if (mglevels->eventsmoothsolve) {ierr = PetscLogEventBegin(mglevels->eventsmoothsolve,0,0,0,0);CHKERRQ(ierr);}
>>   ierr = KSPSolve(mglevels->smoothd,mglevels->b,mglevels->x);CHKERRQ(ierr);  /* pre-smooth */
>>   if (mglevels->eventsmoothsolve) {ierr = PetscLogEventEnd(mglevels->eventsmoothsolve,0,0,0,0);CHKERRQ(ierr);}
>>   if (mglevels->level) {  /* not the coarsest grid */
>>     if (mglevels->eventresidual) {ierr = PetscLogEventBegin(mglevels->eventresidual,0,0,0,0);CHKERRQ(ierr);}
>>     ierr = (*mglevels->residual)(mglevels->A,mglevels->b,mglevels->x,mglevels->r);CHKERRQ(ierr);
>>     if (mglevels->eventresidual) {ierr = PetscLogEventEnd(mglevels->eventresidual,0,0,0,0);CHKERRQ(ierr);}
>>
>>     /* if on finest level and have convergence criteria set */
>>     if (mglevels->level == mglevels->levels-1 && mg->ttol && reason) {
>>       PetscReal rnorm;
>>       ierr = VecNorm(mglevels->r,NORM_2,&rnorm);CHKERRQ(ierr);
>>       if (rnorm <= mg->ttol) {
>>         if (rnorm < mg->abstol) {
>>           *reason = PCRICHARDSON_CONVERGED_ATOL;
>>           ierr = PetscInfo2(pc,"Linear solver has converged. Residual norm %G is less than absolute tolerance %G\n",rnorm,mg->abstol);CHKERRQ(ierr);
>>         } else {
>>           *reason = PCRICHARDSON_CONVERGED_RTOL;
>>           ierr = PetscInfo2(pc,"Linear solver has converged. Residual norm %G is less than relative tolerance times initial residual norm %G\n",rnorm,mg->ttol);CHKERRQ(ierr);
>>         }
>>         PetscFunctionReturn(0);
>>       }
>>     }
>>
>>     mgc = *(mglevelsin - 1);
>>     if (mglevels->eventinterprestrict) {ierr = PetscLogEventBegin(mglevels->eventinterprestrict,0,0,0,0);CHKERRQ(ierr);}
>>     ierr = MatRestrict(mglevels->restrct,mglevels->r,mgc->b);CHKERRQ(ierr);
>>     if (mglevels->eventinterprestrict) {ierr = PetscLogEventEnd(mglevels->eventinterprestrict,0,0,0,0);CHKERRQ(ierr);}
>>     ierr = VecSet(mgc->x,0.0);CHKERRQ(ierr);
>>     while (cycles--) {
>>       ierr = PCMGMCycle_Private(pc,mglevelsin-1,reason);CHKERRQ(ierr);
>>     }
>>     if (mglevels->eventinterprestrict) {ierr = PetscLogEventBegin(mglevels->eventinterprestrict,0,0,0,0);CHKERRQ(ierr);}
>>     ierr = MatInterpolateAdd(mglevels->interpolate,mgc->x,mglevels->x,mglevels->x);CHKERRQ(ierr);
>>     if (mglevels->eventinterprestrict) {ierr = PetscLogEventEnd(mglevels->eventinterprestrict,0,0,0,0);CHKERRQ(ierr);}
>>     if (mglevels->eventsmoothsolve) {ierr = PetscLogEventBegin(mglevels->eventsmoothsolve,0,0,0,0);CHKERRQ(ierr);}
>>     ierr = KSPSolve(mglevels->smoothu,mglevels->b,mglevels->x);CHKERRQ(ierr);  /* post smooth */
>>     if (mglevels->eventsmoothsolve) {ierr = PetscLogEventEnd(mglevels->eventsmoothsolve,0,0,0,0);CHKERRQ(ierr);}
>>   }
>>   PetscFunctionReturn(0);
>> }
>>
>> to remove the unneeded computations.
>>
>> Barry
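As a rough sketch of the kind of modification being suggested here (assuming the pre-smooth is to be skipped entirely and the level starts from a zero iterate, so the residual is just b; the flag name is hypothetical and mgc is assumed to already be set to *(mglevelsin - 1)):

/* Sketch only, inside PCMGMCycle_Private: when no pre-smooth is wanted and x
   starts at zero, the residual b - A*x equals b, so the pre-smooth and the
   explicit residual computation can both be dropped and b restricted directly.
   mg->skipdownsmooth is a hypothetical flag, not an existing PC_MG field. */
if (mg->skipdownsmooth) {
  ierr = VecSet(mglevels->x,0.0);CHKERRQ(ierr);                             /* zero iterate */
  ierr = MatRestrict(mglevels->restrct,mglevels->b,mgc->b);CHKERRQ(ierr);   /* residual == b */
} else {
  ierr = KSPSolve(mglevels->smoothd,mglevels->b,mglevels->x);CHKERRQ(ierr); /* pre-smooth */
  ierr = (*mglevels->residual)(mglevels->A,mglevels->b,mglevels->x,mglevels->r);CHKERRQ(ierr);
  ierr = MatRestrict(mglevels->restrct,mglevels->r,mgc->b);CHKERRQ(ierr);
}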
>>
>>
>>
>> On Aug 13, 2012, at 3:39 PM, John Fettig <john.fettig at gmail.com> wrote:
>>
>> > What if you wanted to do a full cycle with no pre-smooths instead of a
>> > V-cycle?
>> >
>> > John
>> >
>> > On Mon, Aug 13, 2012 at 4:34 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>> >>
>> >> #undef __FUNCT__
>> >> #define __FUNCT__ "PCMGKCycle_Private"
>> >> PetscErrorCode PCMGKCycle_Private(PC pc,PC_MG_Levels **mglevels)
>> >> {
>> >>   PetscErrorCode ierr;
>> >>   PetscInt       i,l = mglevels[0]->levels;
>> >>
>> >>   PetscFunctionBegin;
>> >>   /* restrict the RHS through all levels to coarsest. */
>> >>   for (i=l-1; i>0; i--){
>> >>     if (mglevels[i]->eventinterprestrict) {ierr = PetscLogEventBegin(mglevels[i]->eventinterprestrict,0,0,0,0);CHKERRQ(ierr);}
>> >>     ierr = MatRestrict(mglevels[i]->restrct,mglevels[i]->b,mglevels[i-1]->b);CHKERRQ(ierr);
>> >>     if (mglevels[i]->eventinterprestrict) {ierr = PetscLogEventEnd(mglevels[i]->eventinterprestrict,0,0,0,0);CHKERRQ(ierr);}
>> >>   }
>> >>
>> >>   /* work our way up through the levels */
>> >>   ierr = VecSet(mglevels[0]->x,0.0);CHKERRQ(ierr);
>> >>   for (i=0; i<l-1; i++) {
>> >>     if (mglevels[i]->eventsmoothsolve) {ierr = PetscLogEventBegin(mglevels[i]->eventsmoothsolve,0,0,0,0);CHKERRQ(ierr);}
>> >>     ierr = KSPSolve(mglevels[i]->smoothd,mglevels[i]->b,mglevels[i]->x);CHKERRQ(ierr);
>> >>     if (mglevels[i]->eventsmoothsolve) {ierr = PetscLogEventEnd(mglevels[i]->eventsmoothsolve,0,0,0,0);CHKERRQ(ierr);}
>> >>     if (mglevels[i+1]->eventinterprestrict) {ierr = PetscLogEventBegin(mglevels[i+1]->eventinterprestrict,0,0,0,0);CHKERRQ(ierr);}
>> >>     ierr = MatInterpolate(mglevels[i+1]->interpolate,mglevels[i]->x,mglevels[i+1]->x);CHKERRQ(ierr);
>> >>     if (mglevels[i+1]->eventinterprestrict) {ierr = PetscLogEventEnd(mglevels[i+1]->eventinterprestrict,0,0,0,0);CHKERRQ(ierr);}
>> >>   }
>> >>   if (mglevels[l-1]->eventsmoothsolve) {ierr = PetscLogEventBegin(mglevels[l-1]->eventsmoothsolve,0,0,0,0);CHKERRQ(ierr);}
>> >>   ierr = KSPSolve(mglevels[l-1]->smoothd,mglevels[l-1]->b,mglevels[l-1]->x);CHKERRQ(ierr);
>> >>   if (mglevels[l-1]->eventsmoothsolve) {ierr = PetscLogEventEnd(mglevels[l-1]->eventsmoothsolve,0,0,0,0);CHKERRQ(ierr);}
>> >>
>> >>   PetscFunctionReturn(0);
>> >> }
>> >>
>> >>
>> >> On Aug 13, 2012, at 3:19 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> >>
>> >>> Shorthand for this is -pc_mg_type kaskade.
>> >>>
>> >>> On Mon, Aug 13, 2012 at 1:01 PM, John Fettig <john.fettig at gmail.com>
>> >>> wrote:
>> >>> Barry,
>> >>>
>> >>> Thank you for answering my question. I have another one for you: it
>> >>> seems the special case of zero pre-smooths is somewhat non-trivial.
>> >>> The best I can do is set the pre-smoother to Richardson with PCNONE
>> >>> and zero as max_its. However, if you aren't careful in setting
>> >>> KSPSetInitialGuessNonzero this can have unexpected results since the
>> >>> generic KSPSolve will clobber your solution before it even tests a
>> >>> convergence criterion (thus ruling out KSPPREONLY). It also does a
>> >>> couple of unnecessary residual calculations. Would it be reasonable to
>> >>> put a zero-iteration special case in KSPSolve so that if you don't
>> >>> want any iterations it doesn't actually do anything (no setup, no
>> >>> preconditioner, no residual, no scaling, etc.)?
>> >>>
>> >>> Thanks,
>> >>> John
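For concreteness, the workaround described above would be set up along these lines (a sketch only; pc is the PCMG preconditioner and l the level whose pre-smooth is being disabled):

/* Configure the down-smoother on level l as Richardson + PCNONE with zero
   iterations, and mark the initial guess as nonzero so KSPSolve does not
   zero out the incoming solution vector. */
KSP dksp;
PC  dpc;

ierr = PCMGGetSmootherDown(pc,l,&dksp);CHKERRQ(ierr);
ierr = KSPSetType(dksp,KSPRICHARDSON);CHKERRQ(ierr);
ierr = KSPGetPC(dksp,&dpc);CHKERRQ(ierr);
ierr = PCSetType(dpc,PCNONE);CHKERRQ(ierr);
ierr = KSPSetTolerances(dksp,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT,0);CHKERRQ(ierr); /* max_its = 0 */
ierr = KSPSetInitialGuessNonzero(dksp,PETSC_TRUE);CHKERRQ(ierr);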
>> >>>
>> >>> On Thu, Aug 9, 2012 at 6:37 PM, Barry Smith <bsmith at mcs.anl.gov>
>> >>> wrote:
>> >>>>
>> >>>> John,
>> >>>>
>> >>>> On Aug 9, 2012, at 9:50 AM, John Fettig <john.fettig at gmail.com>
>> >>>> wrote:
>> >>>>
>> >>>>> I am a little confused about what Richardson means. If you use
>> >>>>> multiplicative V-cycle multigrid with Richardson KSP (and no
>> >>>>> convergence monitor), it sets the applyrichardson operator to
>> >>>>> PCApplyRichardson_MG, which appears to just run V-cycles until
>> >>>>> convergence.
>> >>>>
>> >>>> Yes, this is correct.
>> >>>>
>> >>>>> As far as I can tell, it doesn't ever update according
>> >>>>> to the documented
>> >>>>>
>> >>>>> x^{n+1} = x^{n} + scale*B(b - A x^{n})
>> >>>>>
>> >>>> In exact arithmetic it is actually "implicitly" doing exactly this
>> >>>> update. It is difficult to see why this is true in general (because B is
>> >>>> rather complicated for multigrid), but if you consider only two levels
>> >>>> with a direct solver on the coarse grid and SSOR as the pre- and
>> >>>> post-smoother, you can write out the formulas and map back and forth
>> >>>> between the two forms. The reason for the PCApplyRichardson_ forms is
>> >>>> that they are a bit more efficient than separating out the action of B
>> >>>> and then doing the update as above.
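Written out with the pieces separated, one such Richardson step is just the following (a sketch; A, the preconditioner B, the scale factor, and work vectors r and z are assumed to already exist):

/* One explicit Richardson update  x <- x + scale * B (b - A x) */
ierr = MatMult(A,x,r);CHKERRQ(ierr);      /* r = A x           */
ierr = VecAYPX(r,-1.0,b);CHKERRQ(ierr);   /* r = b - A x       */
ierr = PCApply(B,r,z);CHKERRQ(ierr);      /* z = B (b - A x)   */
ierr = VecAXPY(x,scale,z);CHKERRQ(ierr);  /* x = x + scale * z */

PCApplyRichardson_MG folds this update into the cycle itself, which is why the separate residual and vector operations above never appear.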
>> >>>>
>> >>>>
>> >>>>> If on the other hand you use full MG, it does update according to
>> >>>>> the
>> >>>>> above formula. This also happens if you set a convergence monitor.
>> >>>>>
>> >>>>> I can see how multiplicative V-cycle with Richardson is simply using
>> >>>>> multigrid as a solver. What I don't understand is how full MG with
>> >>>>> Richardson is using multigrid as a solver, because it is using the
>> >>>>> update formula above between cycles. Shouldn't there be an
>> >>>>> applyrichardson for full multigrid as well? If not, why?
>> >>>>
>> >>>> I think there could be an applyRichardson for full multigrid, but it
>> >>>> would be kind of complicated and would not benefit much: the amount of
>> >>>> work in a full multigrid step is much higher, so the savings would be a
>> >>>> much lower percentage than with a V-cycle.
>> >>>>
>> >>>> Barry
>> >>>>
>> >>>>>
>> >>>>> Thanks,
>> >>>>> John
>> >>>>
>> >>>
>> >>
>>
>