[petsc-users] Composite shell preconditioner
Alexander Grayver
agrayver at gfz-potsdam.de
Fri Aug 17 07:48:11 CDT 2012
Hello PETSc team!
I would like to apply some sort of correction during the Krylov process
every N iterations, so I decided to use a composite preconditioner like
this:
-pc_type composite -pc_composite_pcs shell,sor/icc/gamg
I started testing with the simplest possible shell preconditioner, one
that does nothing:
subroutine ShellPCApply(pc,x,y,ierr)
implicit none
#include "finclude/petscsys.h"
#include "finclude/petscvec.h"
#include "finclude/petscpc.h"
PC,intent(in) :: pc
Vec,intent(in) :: x
Vec,intent(inout) :: y
PetscErrorCode,intent(out) :: ierr
call VecCopy(x,y,ierr);CHKERRQ(ierr)
end subroutine ShellPCApply
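For completeness, hooking ShellPCApply into the composite's first slot is typically done after KSPSetFromOptions, via PCCompositeGetPC and PCShellSetApply. A hedged Fortran fragment (not a complete program; error checks and declarations of ksp, pc, subpc omitted):

```fortran
! After KSPSetFromOptions(ksp,ierr), retrieve the composite PC and
! attach the apply routine to sub-PC 0 (the shell).
call KSPGetPC(ksp,pc,ierr)
call PCCompositeGetPC(pc,0,subpc,ierr)
call PCShellSetApply(subpc,ShellPCApply,ierr)
```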
It turns out that this by itself changes convergence. With:
-pc_type composite -pc_composite_pcs shell,sor
---->
Linear solve converged due to CONVERGED_RTOL iterations 125
whereas
-pc_type composite -pc_composite_pcs sor
---->
Linear solve converged due to CONVERGED_RTOL iterations 35
Why does this happen? What is the right way to do this?
Thanks in advance.
Just in case, here is the -ksp_view output for -pc_composite_pcs shell,sor :
PC Object:() 1 MPI processes
  type: composite
  Composite PC type - ADDITIVE
  PCs on composite preconditioner follow
  ---------------------------------
  PC Object: (sub_0_) 1 MPI processes
    type: shell
    Shell: no name
    linear system matrix = precond matrix:
    Matrix Object: 1 MPI processes
      type: seqaij
      rows=59049, cols=59049
      total: nonzeros=741555, allocated nonzeros=767637
      total number of mallocs used during MatSetValues calls =0
        not using I-node routines
  PC Object: (sub_1_) 1 MPI processes
    type: sor
    SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
    linear system matrix = precond matrix:
    Matrix Object: 1 MPI processes
      type: seqaij
      rows=59049, cols=59049
      total: nonzeros=741555, allocated nonzeros=767637
      total number of mallocs used during MatSetValues calls =0
        not using I-node routines
  ---------------------------------
  linear system matrix = precond matrix:
  Matrix Object: 1 MPI processes
    type: seqaij
    rows=59049, cols=59049
    total: nonzeros=741555, allocated nonzeros=767637
    total number of mallocs used during MatSetValues calls =0
      not using I-node routines
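Note that the view above reports "Composite PC type - ADDITIVE", i.e. the composite applies the sum of its sub-preconditioners, y = (B_1 + B_2) x; with a do-nothing shell (B_1 = I) the applied preconditioner is I + B_sor rather than B_sor alone, so a different preconditioned operator is handed to the Krylov method. This effect can be mimicked outside PETSc; a minimal scipy sketch, using a small 1-D Laplacian and a Gauss-Seidel sweep as hypothetical stand-ins for the actual system and the SOR preconditioner:

```python
import numpy as np
from scipy.sparse import diags, tril
from scipy.sparse.linalg import LinearOperator, gmres, spsolve_triangular

# Small 1-D Laplacian as a stand-in for the real 59049x59049 operator.
n = 50
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

# Forward Gauss-Seidel application: B x = L^{-1} x, L = lower triangle of A.
L = tril(A).tocsr()
gs = lambda x: spsolve_triangular(L, x, lower=True)

M_gs = LinearOperator((n, n), matvec=gs)                    # B = L^{-1}
M_add = LinearOperator((n, n), matvec=lambda x: x + gs(x))  # B = I + L^{-1}

def solve(M):
    """Run preconditioned GMRES, counting inner iterations via the callback."""
    its = [0]
    x, info = gmres(A, b, M=M, restart=n, maxiter=n,
                    callback=lambda _: its.__setitem__(0, its[0] + 1),
                    callback_type="pr_norm")
    return x, info, its[0]

for name, M in (("sor-like alone", M_gs), ("identity shell + sor-like", M_add)):
    x, info, k = solve(M)
    print(f"{name}: {k} iterations, residual {np.linalg.norm(b - A @ x):.2e}")
```

Both runs solve the same system, but the iteration counts generally differ because the two preconditioned operators differ.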
--
Regards,
Alexander