[petsc-users] GMRES -> PCMG -> PCASM pre- post- smoother

Aulisa, Eugenio eugenio.aulisa at ttu.edu
Thu Aug 20 06:51:17 CDT 2015


Hi Barry,

Thanks for your answer.

I run my applications with no command-line options, and I do not think I changed any PETSC_OPTIONS,
at least not deliberately.

The source is available at
https://github.com/NumPDEClassTTU/femus
but it is part of a much larger library, and
I do not think any of you want to install and run it
just to find out what I messed up.

In any case, if you just want to look at the source code
where I set up the level smoothers, it is in

https://github.com/NumPDEClassTTU/femus/blob/master/src/algebra/AsmPetscLinearEquationSolver.cpp
 
line 400

void AsmPetscLinearEquationSolver::MGsetLevels (
    LinearEquationSolver *LinSolver, const unsigned &level, const unsigned &levelMax,
    const vector <unsigned> &variable_to_be_solved, SparseMatrix* PP, SparseMatrix* RR ){

Be aware that, even if it seems that this takes care of the coarse level, it does not.
The coarse-level smoother is set somewhere else.

Thanks,
Eugenio

________________________________________
From: Barry Smith [bsmith at mcs.anl.gov]
Sent: Thursday, August 20, 2015 2:37 AM
To: Aulisa, Eugenio
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] GMRES -> PCMG -> PCASM pre- post- smoother

   What you describe is not the expected behavior. I expected exactly the result that you expected.

Do you perhaps have some PETSc options around that may be changing the post-smoother? On the command line, in a petscrc file, or in the PETSC_OPTIONS environment variable?  Can you send us some code that we could run that reproduces the problem?

  Barry

> On Aug 19, 2015, at 9:26 PM, Aulisa, Eugenio <eugenio.aulisa at ttu.edu> wrote:
>
> Hi,
>
> I am solving an iteration of
>
> GMRES -> PCMG -> PCASM
>
> where I build my own ASM domain decomposition.
>
> In setting up the PCMG, I would like to use
> the same pre- and post-smoother at each level,
> and for this reason I am using
> ...
> PCMGGetSmoother(pcMG, level, &subksp);
>
> to extract and configure the KSP object at each level.
>
> Then, in setting up the PCASM, I use
> ...
> KSPGetPC(subksp, &subpc);
> PCSetType(subpc, PCASM);
> ...
> and then set my own decomposition
> ...
> PCASMSetLocalSubdomains(subpc, _is_loc_idx.size(), &_is_ovl[0], &_is_loc[0]);
> ...
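>
> Putting those pieces together, the per-level setup I am describing is roughly the
> following. This is only a minimal sketch: SetLevelASM, nlocal, is_ovl and is_loc are
> placeholder names standing in for _is_loc_idx.size(), _is_ovl and _is_loc in the
> actual code.
>
> #include <petscksp.h>
>
> /* Configure the smoother of pcMG at the given level as GMRES + ASM
>    with user-defined subdomains (is_ovl = overlapped, is_loc = non-overlapped). */
> static PetscErrorCode SetLevelASM(PC pcMG, PetscInt level, PetscInt nlocal,
>                                   IS is_ovl[], IS is_loc[])
> {
>   KSP            subksp;
>   PC             subpc;
>   PetscErrorCode ierr;
>
>   PetscFunctionBeginUser;
>   /* this KSP is expected to act as both pre- and post-smoother */
>   ierr = PCMGGetSmoother(pcMG, level, &subksp);CHKERRQ(ierr);
>   ierr = KSPSetType(subksp, KSPGMRES);CHKERRQ(ierr);
>   ierr = KSPGetPC(subksp, &subpc);CHKERRQ(ierr);
>   ierr = PCSetType(subpc, PCASM);CHKERRQ(ierr);
>   ierr = PCASMSetLocalSubdomains(subpc, nlocal, is_ovl, is_loc);CHKERRQ(ierr);
>   PetscFunctionReturn(0);
> }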
>
> Now everything compiles and runs with no memory leaks,
> but I do not get the expected convergence.
>
> When I checked the output of -ksp_view, I saw something that puzzled me.
> At each level > 0, the ASM domain decomposition in the MG pre-smoother
> is the one that I set; for example, with 4 processes I get
>
>>>>>>>>>>>>>>>>>>>>
> ...
> Down solver (pre-smoother) on level 2 -------------------------------
>    KSP Object:    (level-2)     4 MPI processes
>      type: gmres
>        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>        GMRES: happy breakdown tolerance 1e-30
>      maximum iterations=1
>      using preconditioner applied to right hand side for initial guess
>      tolerances:  relative=1e-12, absolute=1e-20, divergence=1e+50
>      left preconditioning
>      using nonzero initial guess
>      using NONE norm type for convergence test
>    PC Object:    (level-2)     4 MPI processes
>      type: asm
>        Additive Schwarz: total subdomain blocks = 198, amount of overlap = 0
>        Additive Schwarz: restriction/interpolation type - RESTRICT
>        [0] number of local blocks = 52
>        [1] number of local blocks = 48
>        [2] number of local blocks = 48
>        [3] number of local blocks = 50
>        Local solve info for each block is in the following KSP and PC objects:
>        - - - - - - - - - - - - - - - - - -
> ...
>>>>>>>>>>>>
>
>
> whereas in the post-smoother I have the default ASM decomposition with overlap 1:
>
>
>>>>>>>>>>>>
> ...
> Up solver (post-smoother) on level 2 -------------------------------
>    KSP Object:    (level-2)     4 MPI processes
>      type: gmres
>        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>        GMRES: happy breakdown tolerance 1e-30
>      maximum iterations=2
>      tolerances:  relative=1e-12, absolute=1e-20, divergence=1e+50
>      left preconditioning
>      using nonzero initial guess
>      using NONE norm type for convergence test
>    PC Object:    (level-2)     4 MPI processes
>      type: asm
>        Additive Schwarz: total subdomain blocks = 4, amount of overlap = 1
>        Additive Schwarz: restriction/interpolation type - RESTRICT
>        Local solve is same for all blocks, in the following KSP and PC objects:
>      KSP Object:      (level-2sub_)       1 MPI processes
>        type: preonly
>        maximum iterations=10000, initial guess is zero
>        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>        left preconditioning
> ...
>>>>>>>>>>>>>>
> %%%%%%%%%%%%%%%%%%%%%%%%
>
> So it seems that by using
>
> PCMGGetSmoother(pcMG, level, &subksp);
>
> I was able to set both the pre- and post-smoother to be PCASM,
> but everything I did after that applied only to the
> pre-smoother, while the post-smoother got the default PCASM options.
>
> I know that I can use
> PCMGGetSmootherDown and PCMGGetSmootherUp, but that would
> probably double the memory allocation and the computational time in the ASM,
> roughly as in the sketch below.
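>
> Here is what I mean, again only as a sketch with the same placeholder names as
> above; each smoother would carry its own ASM setup:
>
> static PetscErrorCode SetLevelASMDownUp(PC pcMG, PetscInt level, PetscInt nlocal,
>                                         IS is_ovl[], IS is_loc[])
> {
>   KSP            downksp, upksp;
>   PC             downpc, uppc;
>   PetscErrorCode ierr;
>
>   PetscFunctionBeginUser;
>   /* asking for separate down/up smoothers splits them into two KSP objects */
>   ierr = PCMGGetSmootherDown(pcMG, level, &downksp);CHKERRQ(ierr);
>   ierr = PCMGGetSmootherUp(pcMG, level, &upksp);CHKERRQ(ierr);
>
>   /* the same subdomain decomposition has to be installed twice */
>   ierr = KSPGetPC(downksp, &downpc);CHKERRQ(ierr);
>   ierr = PCSetType(downpc, PCASM);CHKERRQ(ierr);
>   ierr = PCASMSetLocalSubdomains(downpc, nlocal, is_ovl, is_loc);CHKERRQ(ierr);
>
>   ierr = KSPGetPC(upksp, &uppc);CHKERRQ(ierr);
>   ierr = PCSetType(uppc, PCASM);CHKERRQ(ierr);
>   ierr = PCASMSetLocalSubdomains(uppc, nlocal, is_ovl, is_loc);CHKERRQ(ierr);
>   PetscFunctionReturn(0);
> }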
>
> Is there any way I can just use PCMGGetSmoother
> and use the same PCASM for both the pre- and post-smoother?
>
> I hope I was clear enough.
>
> Thanks a lot for your help,
> Eugenio
>
>
>


