[petsc-users] Problem with ksp view
Michele Rosso
mrosso at uci.edu
Fri Jul 10 14:11:48 CDT 2015
Barry,
I made a mistake naming the files: the messy one is the output written to file, while the correct one comes from the command line. I am using a custom PC
(dmdarepart).
I attached 4 files:
ksp_view_cl_1proc.txt: command-line output for the 1-process run
ksp_view_cl_8proc.txt: command-line output for the 8-process run
ksp_view_fl_1proc.txt: file output for the 1-process run
ksp_view_fl_8proc.txt: file output for the 8-process run
This is what I do in the code:
CALL KSPSolve( ksp, b, x, ierr )
CALL PetscViewerASCIIOpen( PETSC_COMM_WORLD, ksp_view_file, viewer, ierr )
CALL KSPView( ksp, viewer, ierr )
CALL PetscViewerDestroy( viewer, ierr )
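For reference, the same pattern with its declarations filled in might look like the sketch below; the file name 'ksp_view.txt' is illustrative, and ksp, b, and x are assumed to be created and configured elsewhere:

      ! Declarations used by the fragment above (ksp, b, x created elsewhere)
      PetscErrorCode ierr
      PetscViewer    viewer
      character*(64) ksp_view_file

      ksp_view_file = 'ksp_view.txt'

      ! Solve first, so the full solver hierarchy (MG levels, coarse PC) is set up
      CALL KSPSolve( ksp, b, x, ierr )

      ! PetscViewerASCIIOpen and KSPView are collective: every rank in
      ! PETSC_COMM_WORLD must make these calls, in the same order
      CALL PetscViewerASCIIOpen( PETSC_COMM_WORLD, ksp_view_file, viewer, ierr )
      CALL KSPView( ksp, viewer, ierr )
      CALL PetscViewerDestroy( viewer, ierr )

With an ASCII viewer only rank 0 actually writes the file, but the calls above still have to be made on every rank.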
I will try to provide a minimal example ASAP.
Michele
On Fri, 2015-07-10 at 13:43 -0500, Barry Smith wrote:
> Are you sure you named the files correctly and sent the right files? This is very confusing. What is in ksp_view_file.txt looks reasonable to me: you are running three-level multigrid where the coarse solve is two-level multigrid with a SuperLU_Dist coarse solve. What is in ksp_view_command.txt is a jumbled mess that seems to be missing something.
>
> If you run both on one process what happens?
>
> Barry
>
> > On Jul 10, 2015, at 1:14 PM, Michele Rosso <mrosso at uci.edu> wrote:
> >
> > Hi,
> >
> > I am having a problem with KSPView. If I use the command-line option, the output of KSPView is complete. If instead I call KSPView after KSPSolve and have the KSP info printed to a file, the output is missing some parts (see attached files).
> > What can I do to fix this?
> >
> > Thanks,
> > Michele
> >
> > <ksp_view_commandline.txt><ksp_view_file.txt>
>
-------------- next part (ksp_view_cl_1proc.txt) --------------
KSP Object: 1 MPI processes
type: cg
maximum iterations=10000
tolerances: relative=1e-09, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=3 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_) 1 MPI processes
type: dmdarepart
DMDARepart: parent comm size reduction factor = 2
DMDARepart: subcomm_size = 1
KSP Object: (mg_coarse_dmdarepart_) 1 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: (mg_coarse_dmdarepart_) 1 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=2 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_coarse_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_mg_coarse_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: nd
factor fill ratio given 0, needed 0
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=8, cols=8
package used to perform factorization: superlu_dist
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
SuperLU_DIST run parameters:
Process grid nprow 1 x npcol 1
Equilibrate matrix TRUE
Matrix input mode 0
Replace tiny pivots TRUE
Use iterative refinement FALSE
Processors in row 1 col partition 1
Row permutation LargeDiag
Column permutation METIS_AT_PLUS_A
Parallel symbolic factorization FALSE
Repeated factorization SamePattern_SameRowPerm
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=8, cols=8
total: nonzeros=32, allocated nonzeros=32
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_levels_1_) 1 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=0, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_mg_levels_1_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 1 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=512, cols=512
total: nonzeros=3200, allocated nonzeros=3200
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 1 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=4096, cols=4096
total: nonzeros=27136, allocated nonzeros=27136
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=4096, cols=4096
total: nonzeros=27136, allocated nonzeros=27136
total number of mallocs used during MatSetValues calls =0
not using I-node routines
#PETSc Option Table entries:
-ksp_initial_guess_nonzero yes
-ksp_norm_type unpreconditioned
-ksp_rtol 1e-9
-ksp_type cg
-ksp_view
-log_summary log_summary.txt
-mg_coarse_dmdarepart_ksp_type richardson
-mg_coarse_dmdarepart_mg_coarse_pc_factor_mat_solver_package superlu_dist
-mg_coarse_dmdarepart_mg_coarse_pc_type lu
-mg_coarse_dmdarepart_mg_levels_ksp_type richardson
-mg_coarse_dmdarepart_pc_mg_galerkin
-mg_coarse_dmdarepart_pc_mg_levels 2
-mg_coarse_dmdarepart_pc_type mg
-mg_coarse_ksp_type preonly
-mg_coarse_pc_dmdarepart_factor 2
-mg_coarse_pc_type dmdarepart
-mg_levels_ksp_max_it 1
-mg_levels_ksp_type richardson
-nx 16
-ny 16
-nz 16
-options_left
-pc_mg_galerkin
-pc_mg_levels 3
-pc_type mg
-px 1
-py 1
-pz 1
#End of PETSc Option Table entries
There are 6 unused database options. They are:
Option left: name:-nx value: 16
Option left: name:-ny value: 16
Option left: name:-nz value: 16
Option left: name:-px value: 1
Option left: name:-py value: 1
Option left: name:-pz value: 1
-------------- next part (ksp_view_cl_8proc.txt) --------------
KSP Object: 8 MPI processes
type: cg
maximum iterations=10000
tolerances: relative=1e-09, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using UNPRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=3 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_) 8 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_) 8 MPI processes
type: dmdarepart
DMDARepart: parent comm size reduction factor = 2
DMDARepart: subcomm_size = 4
KSP Object: (mg_coarse_dmdarepart_) 4 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: (mg_coarse_dmdarepart_) 4 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=2 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_coarse_) 4 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_mg_coarse_) 4 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
matrix ordering: natural
factor fill ratio given 0, needed 0
Factored matrix follows:
Mat Object: 4 MPI processes
type: mpiaij
rows=8, cols=8
package used to perform factorization: superlu_dist
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
SuperLU_DIST run parameters:
Process grid nprow 2 x npcol 2
Equilibrate matrix TRUE
Matrix input mode 1
Replace tiny pivots TRUE
Use iterative refinement FALSE
Processors in row 2 col partition 2
Row permutation LargeDiag
Column permutation METIS_AT_PLUS_A
Parallel symbolic factorization FALSE
Repeated factorization SamePattern_SameRowPerm
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=8, cols=8
total: nonzeros=32, allocated nonzeros=32
total number of mallocs used during MatSetValues calls =0
using I-node (on process 0) routines: found 1 nodes, limit used is 5
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_levels_1_) 4 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=0, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_mg_levels_1_) 4 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=512, cols=512
total: nonzeros=3200, allocated nonzeros=3200
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=4096, cols=4096
total: nonzeros=27136, allocated nonzeros=27136
total number of mallocs used during MatSetValues calls =0
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=4096, cols=4096
total: nonzeros=27136, allocated nonzeros=27136
total number of mallocs used during MatSetValues calls =0
#PETSc Option Table entries:
-ksp_initial_guess_nonzero yes
-ksp_norm_type unpreconditioned
-ksp_rtol 1e-9
-ksp_type cg
-ksp_view
-log_summary log_summary.txt
-mg_coarse_dmdarepart_ksp_type richardson
-mg_coarse_dmdarepart_mg_coarse_pc_factor_mat_solver_package superlu_dist
-mg_coarse_dmdarepart_mg_coarse_pc_type lu
-mg_coarse_dmdarepart_mg_levels_ksp_type richardson
-mg_coarse_dmdarepart_pc_mg_galerkin
-mg_coarse_dmdarepart_pc_mg_levels 2
-mg_coarse_dmdarepart_pc_type mg
-mg_coarse_ksp_type preonly
-mg_coarse_pc_dmdarepart_factor 2
-mg_coarse_pc_type dmdarepart
-mg_levels_ksp_max_it 1
-mg_levels_ksp_type richardson
-nx 16
-ny 16
-nz 16
-options_left
-pc_mg_galerkin
-pc_mg_levels 3
-pc_type mg
-px 2
-py 2
-pz 2
#End of PETSc Option Table entries
There are 6 unused database options. They are:
Option left: name:-nx value: 16
Option left: name:-ny value: 16
Option left: name:-nz value: 16
Option left: name:-px value: 2
Option left: name:-py value: 2
Option left: name:-pz value: 2
-------------- next part (ksp_view_fl_1proc.txt) --------------
KSP Object: 1 MPI processes
type: cg
maximum iterations=10000
tolerances: relative=1e-09, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=3 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_) 1 MPI processes
type: dmdarepart
DMDARepart: parent comm size reduction factor = 2
DMDARepart: subcomm_size = 1
KSP Object: (mg_coarse_dmdarepart_) 1 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: (mg_coarse_dmdarepart_) 1 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=2 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_coarse_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_mg_coarse_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: nd
factor fill ratio given 0, needed 0
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=8, cols=8
package used to perform factorization: superlu_dist
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
SuperLU_DIST run parameters:
Process grid nprow 1 x npcol 1
Equilibrate matrix TRUE
Matrix input mode 0
Replace tiny pivots TRUE
Use iterative refinement FALSE
Processors in row 1 col partition 1
Row permutation LargeDiag
Column permutation METIS_AT_PLUS_A
Parallel symbolic factorization FALSE
Repeated factorization SamePattern_SameRowPerm
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=8, cols=8
total: nonzeros=32, allocated nonzeros=32
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_levels_1_) 1 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=0, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_mg_levels_1_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 1 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=512, cols=512
total: nonzeros=3200, allocated nonzeros=3200
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 1 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=4096, cols=4096
total: nonzeros=27136, allocated nonzeros=27136
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=4096, cols=4096
total: nonzeros=27136, allocated nonzeros=27136
total number of mallocs used during MatSetValues calls =0
not using I-node routines
-------------- next part (ksp_view_fl_8proc.txt) --------------
KSP Object: (mg_coarse_dmdarepart_) 4 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: (mg_coarse_dmdarepart_) 4 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=2 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_coarse_) 4 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=512, cols=512
total: nonzeros=3200, allocated nonzeros=3200
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=4096, cols=4096
total: nonzeros=27136, allocated nonzeros=27136
total number of mallocs used during MatSetValues calls =0
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=4096, cols=4096
total: nonzeros=27136, allocated nonzeros=27136
total number of mallocs used during MatSetValues calls =0
ses
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=64, cols=64
total: nonzeros=352, allocated nonzeros=352
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines