[petsc-users] GMRES -> PCMG -> PCASM pre- post- smoother
Barry Smith
bsmith at mcs.anl.gov
Thu Aug 20 23:54:58 CDT 2015
Ahhh, I see what is happening. In MGsolve() you call both PCMGSetNumberSmoothDown() and PCMGSetNumberSmoothUp():
void AsmPetscLinearEquationSolver::MGsolve ( const bool ksp_clean, const unsigned &npre, const unsigned &npost ) {
  if ( ksp_clean ) {
    PetscMatrix* KKp = static_cast< PetscMatrix* > ( _KK );
    Mat KK = KKp->mat();
    KSPSetOperators ( _ksp, KK, _Pmat );
    KSPSetTolerances ( _ksp, _rtol, _abstol, _dtol, _maxits );
    KSPSetFromOptions ( _ksp );
    PC pcMG;
    KSPGetPC ( _ksp, &pcMG );
    PCMGSetNumberSmoothDown ( pcMG, npre );
    PCMGSetNumberSmoothUp ( pcMG, npost );
  }
  // ...
}

Here is what PCMGSetNumberSmoothDown() does inside PETSc:
PetscErrorCode PCMGSetNumberSmoothDown(PC pc,PetscInt n)
{
  PC_MG          *mg = (PC_MG*)pc->data;
  PC_MG_Levels   **mglevels = mg->levels;
  PetscErrorCode ierr;
  PetscInt       i,levels;

  PetscFunctionBegin;
  PetscValidHeaderSpecific(pc,PC_CLASSID,1);
  if (!mglevels) SETERRQ(PetscObjectComm((PetscObject)pc),PETSC_ERR_ARG_WRONGSTATE,"Must set MG levels before calling");
  PetscValidLogicalCollectiveInt(pc,n,2);
  levels = mglevels[0]->levels;

  for (i=1; i<levels; i++) {
    /* make sure smoother up and down are different */
    ierr = PCMGGetSmootherUp(pc,i,NULL);CHKERRQ(ierr);
    ierr = KSPSetTolerances(mglevels[i]->smoothd,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT,n);CHKERRQ(ierr);
    mg->default_smoothd = n;
  }
  PetscFunctionReturn(0);
}

Note the PCMGGetSmootherUp() call inside the loop; here is what it does:
PetscErrorCode PCMGGetSmootherUp(PC pc,PetscInt l,KSP *ksp)
{
  PC_MG          *mg = (PC_MG*)pc->data;
  PC_MG_Levels   **mglevels = mg->levels;
  PetscErrorCode ierr;
  const char     *prefix;
  MPI_Comm       comm;

  PetscFunctionBegin;
  PetscValidHeaderSpecific(pc,PC_CLASSID,1);
  /*
     This is called only if user wants a different pre-smoother from post.
     Thus we check if a different one has already been allocated,
     if not we allocate it.
  */
  if (!l) SETERRQ(PetscObjectComm((PetscObject)pc),PETSC_ERR_ARG_OUTOFRANGE,"There is no such thing as a up smoother on the coarse grid");
  if (mglevels[l]->smoothu == mglevels[l]->smoothd) {
    KSPType     ksptype;
    PCType      pctype;
    PC          ipc;
    PetscReal   rtol,abstol,dtol;
    PetscInt    maxits;
    KSPNormType normtype;

    ierr = PetscObjectGetComm((PetscObject)mglevels[l]->smoothd,&comm);CHKERRQ(ierr);
    ierr = KSPGetOptionsPrefix(mglevels[l]->smoothd,&prefix);CHKERRQ(ierr);
    ierr = KSPGetTolerances(mglevels[l]->smoothd,&rtol,&abstol,&dtol,&maxits);CHKERRQ(ierr);
    ierr = KSPGetType(mglevels[l]->smoothd,&ksptype);CHKERRQ(ierr);
    ierr = KSPGetNormType(mglevels[l]->smoothd,&normtype);CHKERRQ(ierr);
    ierr = KSPGetPC(mglevels[l]->smoothd,&ipc);CHKERRQ(ierr);
    ierr = PCGetType(ipc,&pctype);CHKERRQ(ierr);

    ierr = KSPCreate(comm,&mglevels[l]->smoothu);CHKERRQ(ierr);
    ierr = KSPSetErrorIfNotConverged(mglevels[l]->smoothu,pc->erroriffailure);CHKERRQ(ierr);
    ierr = PetscObjectIncrementTabLevel((PetscObject)mglevels[l]->smoothu,(PetscObject)pc,mglevels[0]->levels-l);CHKERRQ(ierr);
    ierr = KSPSetOptionsPrefix(mglevels[l]->smoothu,prefix);CHKERRQ(ierr);
    ierr = KSPSetTolerances(mglevels[l]->smoothu,rtol,abstol,dtol,maxits);CHKERRQ(ierr);
    ierr = KSPSetType(mglevels[l]->smoothu,ksptype);CHKERRQ(ierr);
    ierr = KSPSetNormType(mglevels[l]->smoothu,normtype);CHKERRQ(ierr);
    ierr = KSPSetConvergenceTest(mglevels[l]->smoothu,KSPConvergedSkip,NULL,NULL);CHKERRQ(ierr);
    ierr = KSPGetPC(mglevels[l]->smoothu,&ipc);CHKERRQ(ierr);
    ierr = PCSetType(ipc,pctype);CHKERRQ(ierr);
    ierr = PetscLogObjectParent((PetscObject)pc,(PetscObject)mglevels[l]->smoothu);CHKERRQ(ierr);
  }
  if (ksp) *ksp = mglevels[l]->smoothu;
  PetscFunctionReturn(0);
}
As soon as you set both the up and down numbers of iterations, it causes a duplication of the current smoother, with some options preserved but others not (we don't have a KSPDuplicate() that duplicates everything).
So if you are fine with having the same number of pre- and post-smooths, just don't set both of these:
PCMGSetNumberSmoothDown(pcMG, npre);
PCMGSetNumberSmoothUp(pcMG, npost);
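
Instead, you can set the shared iteration count directly on the level smoother returned by PCMGGetSmoother(). A minimal sketch (untested, error checking omitted):

  KSP subksp;
  /* the same KSP is used as both the pre- and post-smoother on this level */
  PCMGGetSmoother(pcMG, level, &subksp);
  /* npre smoothing iterations, applied both on the way down and on the way up */
  KSPSetTolerances(subksp, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT, npre);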
If you want them to be different, you can share the same PC between the two (which has the overlapping matrices in it), but you cannot share the same KSP. I suggest it is simpler just to have the same number of pre- and post-smooths, but a sketch of the PC-sharing approach follows.
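
Something like the following (an untested sketch; error checking omitted, reusing the _is_ovl/_is_loc index sets from your code) shares one ASM PC between distinct down and up KSPs:

  KSP kspd, kspu;
  PC  subpc;
  PCMGGetSmootherDown(pcMG, level, &kspd);
  PCMGGetSmootherUp(pcMG, level, &kspu);   /* forces creation of a separate up smoother */
  /* build the ASM decomposition once, on the down smoother's PC */
  KSPGetPC(kspd, &subpc);
  PCSetType(subpc, PCASM);
  PCASMSetLocalSubdomains(subpc, _is_loc_idx.size(), &_is_ovl[0], &_is_loc[0]);
  /* make the up smoother use the very same PC object */
  KSPSetPC(kspu, subpc);
  /* now the iteration counts can differ without duplicating the ASM setup */
  KSPSetTolerances(kspd, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT, npre);
  KSPSetTolerances(kspu, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT, npost);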
Barry
> On Aug 20, 2015, at 6:51 AM, Aulisa, Eugenio <eugenio.aulisa at ttu.edu> wrote:
>
> Hi Barry,
>
> Thanks for your answer.
>
> I run my applications with no command-line options, and I do not think I changed
> any PETSC_OPTIONS, at least not voluntarily.
>
> The source is available at
> https://github.com/NumPDEClassTTU/femus
> but it is part of a much larger library, and
> I do not think any of you want to install and run it
> just to find what I messed up.
>
> In any case, if you just want to look at the source code
> where I set up the level smoother it is in
>
> https://github.com/NumPDEClassTTU/femus/blob/master/src/algebra/AsmPetscLinearEquationSolver.cpp
>
> line 400
>
> void AsmPetscLinearEquationSolver::MGsetLevels (
> LinearEquationSolver *LinSolver, const unsigned &level, const unsigned &levelMax,
> const vector <unsigned> &variable_to_be_solved, SparseMatrix* PP, SparseMatrix* RR ){
>
> Be aware that even if it seems that this takes care of the coarse level, it does not;
> the coarse-level smoother is set somewhere else.
>
> Thanks,
> Eugenio
>
> ________________________________________
> From: Barry Smith [bsmith at mcs.anl.gov]
> Sent: Thursday, August 20, 2015 2:37 AM
> To: Aulisa, Eugenio
> Cc: petsc-users at mcs.anl.gov
> Subject: Re: [petsc-users] GMRES -> PCMG -> PCASM pre- post- smoother
>
> What you describe is not the expected behavior. I expected exactly the result that you expected.
>
> Do you perhaps have some PETSc options around that may be changing the post-smoother? On the command line, in a petscrc file, or in the environment variable PETSC_OPTIONS? Can you send us some code that we could run that reproduces the problem?
>
> Barry
>
>> On Aug 19, 2015, at 9:26 PM, Aulisa, Eugenio <eugenio.aulisa at ttu.edu> wrote:
>>
>> Hi,
>>
>> I am solving an iteration of
>>
>> GMRES -> PCMG -> PCASM
>>
>> where I build my particular ASM domain decomposition.
>>
>> In setting up the PCMG, I would like to use the same pre- and post-smoother
>> at each level, and for this reason I am using
>> ...
>> PCMGGetSmoother ( pcMG, level , &subksp );
>>
>> to extract and set the KSP object at each level.
>>
>> In setting up PCASM, I then use
>> ...
>> KSPGetPC ( subksp, &subpc );
>> PCSetType ( subpc, PCASM );
>> ...
>> and then set my own decomposition
>> ...
>> PCASMSetLocalSubdomains(subpc,_is_loc_idx.size(),&_is_ovl[0],&_is_loc[0]);
>> ...
>>
>> Now everything compiles and runs with no memory leaks,
>> but I do not get the expected convergence.
>>
>> When I checked the output of -ksp_view, I saw something that puzzled me:
>> at each level > 0, while in the MG pre-smoother the ASM domain decomposition
>> is the one that I set; for example, with 4 processes I get
>>
>>>>>>>>>>>>>>>>>>>>>
>> ...
>> Down solver (pre-smoother) on level 2 -------------------------------
>> KSP Object: (level-2) 4 MPI processes
>> type: gmres
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>> GMRES: happy breakdown tolerance 1e-30
>> maximum iterations=1
>> using preconditioner applied to right hand side for initial guess
>> tolerances: relative=1e-12, absolute=1e-20, divergence=1e+50
>> left preconditioning
>> using nonzero initial guess
>> using NONE norm type for convergence test
>> PC Object: (level-2) 4 MPI processes
>> type: asm
>> Additive Schwarz: total subdomain blocks = 198, amount of overlap = 0
>> Additive Schwarz: restriction/interpolation type - RESTRICT
>> [0] number of local blocks = 52
>> [1] number of local blocks = 48
>> [2] number of local blocks = 48
>> [3] number of local blocks = 50
>> Local solve info for each block is in the following KSP and PC objects:
>> - - - - - - - - - - - - - - - - - -
>> ...
>>>>>>>>>>>>>
>>
>>
>> in the post-smoother I have the default ASM decomposition with overlap 1:
>>
>>
>>>>>>>>>>>>>
>> ...
>> Up solver (post-smoother) on level 2 -------------------------------
>> KSP Object: (level-2) 4 MPI processes
>> type: gmres
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>> GMRES: happy breakdown tolerance 1e-30
>> maximum iterations=2
>> tolerances: relative=1e-12, absolute=1e-20, divergence=1e+50
>> left preconditioning
>> using nonzero initial guess
>> using NONE norm type for convergence test
>> PC Object: (level-2) 4 MPI processes
>> type: asm
>> Additive Schwarz: total subdomain blocks = 4, amount of overlap = 1
>> Additive Schwarz: restriction/interpolation type - RESTRICT
>> Local solve is same for all blocks, in the following KSP and PC objects:
>> KSP Object: (level-2sub_) 1 MPI processes
>> type: preonly
>> maximum iterations=10000, initial guess is zero
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> left preconditioning
>> ...
>>>>>>>>>>>>>>>
>> %%%%%%%%%%%%%%%%%%%%%%%%
>>
>> So it seems that by using
>>
>> PCMGGetSmoother ( pcMG, level , &subksp );
>>
>> I was able to set both the pre- and post-smoothers to be PCASM,
>> but everything I did after that applied only to the
>> pre-smoother, while the post-smoother got the default PCASM options.
>>
>> I know that I can use
>> PCMGGetSmootherDown and PCMGGetSmootherUp, but that would
>> probably double the memory allocation and the computational time in the ASM.
>>
>> Is there any way I can just use PCMGGetSmoother
>> and use the same PCASM in the pre- and post-smoothers?
>>
>> I hope I was clear enough.
>>
>> Thanks a lot for your help,
>> Eugenio
>>
>>
>>
>