[petsc-users] GMRES -> PCMG -> PCASM pre- post- smoother
Barry Smith
bsmith at mcs.anl.gov
Thu Aug 20 02:37:13 CDT 2015
What you describe is not the expected behavior; I would expect exactly the result that you expected.
Do you perhaps have some PETSc options around that may be changing the post-smoother, either on the command line, in the petscrc file, or in the environment variable PETSC_OPTIONS? Can you send us some code that we could run to reproduce the problem?
Barry
> On Aug 19, 2015, at 9:26 PM, Aulisa, Eugenio <eugenio.aulisa at ttu.edu> wrote:
>
> Hi,
>
> I am solving a linear system using the combination
>
> GMRES -> PCMG -> PCASM
>
> where I build my particular ASM domain decomposition.
>
> In setting up the PCMG, I would like to use
> the same pre- and post-smoother at each level,
> and for this reason I am using
> ...
> PCMGGetSmoother ( pcMG, level , &subksp );
>
> to extract and configure the KSP object at each level.
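>
> For clarity, the level loop looks roughly like this (a simplified sketch,
> error checking omitted; nLevels is just what PCMGGetLevels returns for pcMG):
>
>   PetscInt nLevels;
>   PCMGGetLevels ( pcMG, &nLevels );
>   for ( PetscInt level = 1; level < nLevels; level++ ) {   // level 0 is the coarse solve
>     KSP subksp;
>     PCMGGetSmoother ( pcMG, level, &subksp );   // meant to act as both pre- and post-smoother
>     KSPSetType ( subksp, KSPGMRES );
>     KSPSetTolerances ( subksp, 1.e-12, 1.e-20, 1.e+50, 1 );   // as -ksp_view reports for the pre-smoother
>     // ... PCASM setup on this KSP's PC, see below ...
>   }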
>
> Then, to set up PCASM, I use
> ...
> KSPGetPC ( subksp, &subpc );
> PCSetType ( subpc, PCASM );
> ...
> and then set my own decomposition
> ...
> PCASMSetLocalSubdomains(subpc,_is_loc_idx.size(),&_is_ovl[0],&_is_loc[0]);
> ...
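>
> Put together, at each level the ASM setup is essentially (again just a sketch;
> _is_ovl and _is_loc hold one overlapping and one non-overlapping index set per local block):
>
>   PC subpc;
>   KSPGetPC ( subksp, &subpc );
>   PCSetType ( subpc, PCASM );
>   // overlapping subdomains first, non-overlapping ones second
>   PCASMSetLocalSubdomains ( subpc, (PetscInt)_is_loc_idx.size(), &_is_ovl[0], &_is_loc[0] );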
>
> Now everything compiles and runs with no memory leaks,
> but I do not get the expected convergence.
>
> When I checked the output of -ksp_view, I saw something that puzzled me:
> at each level >0 the MG pre-smoother uses the ASM domain decomposition
> that I set; for example, with 4 processes I get
>
>>>>>>>>>>>>>>>>>>>>
> ...
> Down solver (pre-smoother) on level 2 -------------------------------
> KSP Object: (level-2) 4 MPI processes
> type: gmres
> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> GMRES: happy breakdown tolerance 1e-30
> maximum iterations=1
> using preconditioner applied to right hand side for initial guess
> tolerances: relative=1e-12, absolute=1e-20, divergence=1e+50
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (level-2) 4 MPI processes
> type: asm
> Additive Schwarz: total subdomain blocks = 198, amount of overlap = 0
> Additive Schwarz: restriction/interpolation type - RESTRICT
> [0] number of local blocks = 52
> [1] number of local blocks = 48
> [2] number of local blocks = 48
> [3] number of local blocks = 50
> Local solve info for each block is in the following KSP and PC objects:
> - - - - - - - - - - - - - - - - - -
> ...
>>>>>>>>>>>>
>
>
> while in the post-smoother I get the default ASM decomposition with overlap 1:
>
>
>>>>>>>>>>>>
> ...
> Up solver (post-smoother) on level 2 -------------------------------
> KSP Object: (level-2) 4 MPI processes
> type: gmres
> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> GMRES: happy breakdown tolerance 1e-30
> maximum iterations=2
> tolerances: relative=1e-12, absolute=1e-20, divergence=1e+50
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (level-2) 4 MPI processes
> type: asm
> Additive Schwarz: total subdomain blocks = 4, amount of overlap = 1
> Additive Schwarz: restriction/interpolation type - RESTRICT
> Local solve is same for all blocks, in the following KSP and PC objects:
> KSP Object: (level-2sub_) 1 MPI processes
> type: preonly
> maximum iterations=10000, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> ...
>>>>>>>>>>>>>>
>
> So it seems that by using
>
> PCMGGetSmoother ( pcMG, level , &subksp );
>
> I was able to set both the pre- and post-smoothers to be PCASM,
> but everything I did after that applied only to the
> pre-smoother, while the post-smoother got the default PCASM options.
>
> I know that I can use
> PCMGGetSmootherDown and PCMGGetSmootherUp, but that would
> probably double the memory allocation and the computational time in the ASM.
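>
> If I went that route, I suppose each level would end up with something like the
> sketch below, i.e. the same subdomain setup applied twice, once per smoother
> (which is exactly the duplication I would like to avoid):
>
>   KSP kspDown, kspUp;
>   PCMGGetSmootherDown ( pcMG, level, &kspDown );
>   PCMGGetSmootherUp   ( pcMG, level, &kspUp );
>   KSP smoothers[2] = { kspDown, kspUp };
>   for ( int i = 0; i < 2; i++ ) {
>     PC pcSide;
>     KSPGetPC ( smoothers[i], &pcSide );
>     PCSetType ( pcSide, PCASM );
>     // two separate ASM preconditioners, each building its own submatrices and subsolvers
>     PCASMSetLocalSubdomains ( pcSide, (PetscInt)_is_loc_idx.size(), &_is_ovl[0], &_is_loc[0] );
>   }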
>
> Is there any way I can just use PCMGGetSmoother
> and use the same PCASM for both the pre- and post-smoother?
>
> I hope I was clear enough.
>
> Thanks a lot for your help,
> Eugenio
>
>
>