understanding the output from -info

Ben Tay zonexo at gmail.com
Thu Feb 8 09:47:24 CST 2007


Hi,

I'm trying to run my CFD code in parallel with PETSc. Besides the linear
equations handled by PETSc, other parts of the code have also been
parallelized using MPI.

However, I find that the parallel version of the code running on 4 processors
is even slower than the sequential version.

In order to find out why, I've used the -info option to print out the
details. There are two linear systems being solved - momentum and Poisson.
The momentum one is twice the size of the Poisson one.
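For reference, this is roughly how each time step invokes the two solves (a
simplified sketch with placeholder names such as ksp_mom and A_mom, not my
exact code):

  ! momentum system first (global size 12800 in this run), then poisson (6400)
  call KSPSetOperators(ksp_mom,A_mom,A_mom,SAME_NONZERO_PATTERN,ierr)
  call KSPSolve(ksp_mom,b_mom,x_mom,ierr)
  call KSPSetOperators(ksp_poi,A_poi,A_poi,SAME_NONZERO_PATTERN,ierr)
  call KSPSolve(ksp_poi,b_poi,x_poi,ierr)

I launch it with something like "mpirun -np 4 ./cfd -info", where ./cfd is
just a placeholder for my executable. The -info output is shown below: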

[0] User provided function(): (Fortran):PETSc successfully started: procs 4
[1] User provided function(): (Fortran):PETSc successfully started: procs 4
[3] User provided function(): (Fortran):PETSc successfully started: procs 4
[2] User provided function(): (Fortran):PETSc successfully started: procs 4
[0] PetscGetHostName(): Rejecting domainname, likely is NIS atlas2-c12.(none)
[0] User provided function(): Running on machine: atlas2-c12
[1] PetscGetHostName(): Rejecting domainname, likely is NIS atlas2-c12.(none)
[1] User provided function(): Running on machine: atlas2-c12
[3] PetscGetHostName(): Rejecting domainname, likely is NIS atlas2-c08.(none)
[3] User provided function(): Running on machine: atlas2-c08
[2] PetscGetHostName(): Rejecting domainname, likely is NIS atlas2-c08.(none)
[2] User provided function(): Running on machine: atlas2-c08
[0] PetscCommDuplicate(): Duplicating a communicator 91 141 max tags = 1073741823
[1] PetscCommDuplicate(): Duplicating a communicator 91 141 max tags = 1073741823
[2] PetscCommDuplicate(): Duplicating a communicator 91 141 max tags = 1073741823
[3] PetscCommDuplicate(): Duplicating a communicator 91 141 max tags = 1073741823
[0] PetscCommDuplicate(): Duplicating a communicator 92 143 max tags = 1073741823
[2] PetscCommDuplicate(): Duplicating a communicator 92 143 max tags = 1073741823
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[0] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[1] PetscCommDuplicate(): Duplicating a communicator 92 143 max tags = 1073741823
[3] PetscCommDuplicate(): Duplicating a communicator 92 143 max tags = 1073741823
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[3] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
           0        3200
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
        3200        6400
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
        6400        9600
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
        9600       12800
[3] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[0] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[1] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[2] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[2] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[0] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[3] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[1] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[1] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[1] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[1] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[3] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[1] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[3] PetscCommDuplicate(): Using internal PETSc communicator 91 141
        3200        6400
[3] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[3] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[3] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[3] PetscCommDuplicate(): Using internal PETSc communicator 91 141
        9600       12800
[0] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[2] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[0] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[2] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[0] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[2] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[0] PetscCommDuplicate(): Using internal PETSc communicator 91 141
[2] PetscCommDuplicate(): Using internal PETSc communicator 91 141
           0        3200
        6400        9600
[1] MatStashScatterBegin_Private(): No of messages: 0
[1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
[3] MatStashScatterBegin_Private(): No of messages: 0
[3] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
[3] MatAssemblyEnd_SeqAIJ(): Matrix size: 3200 X 3200; storage space: 4064 unneeded,53536 used
[1] MatAssemblyEnd_SeqAIJ(): Matrix size: 3200 X 3200; storage space: 4064 unneeded,53536 used
[3] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[3] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 18
[1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 18
[3] Mat_CheckInode(): Found 1600 nodes of 3200. Limit used: 5. Using Inode routines
[1] Mat_CheckInode(): Found 1600 nodes of 3200. Limit used: 5. Using Inode routines
[0] MatStashScatterBegin_Private(): No of messages: 0
[0] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
[2] MatStashScatterBegin_Private(): No of messages: 0
[2] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
[2] MatAssemblyEnd_SeqAIJ(): Matrix size: 3200 X 3200; storage space: 4064 unneeded,53536 used
[2] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[2] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 18
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 3200 X 3200; storage space: 3120 unneeded,54480 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 18
[2] Mat_CheckInode(): Found 1600 nodes of 3200. Limit used: 5. Using Inode routines
[0] Mat_CheckInode(): Found 1600 nodes of 3200. Limit used: 5. Using Inode routines
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] MatSetUpMultiply_MPIAIJ(): Using block index set to define scatter
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] VecScatterCreateCommon_PtoS(): Using blocksize 1 scatter
[1] MatSetOption_Inode(): Not using Inode routines due to MatSetOption(MAT_DO_NOT_USE_INODES
[1] MatAssemblyEnd_SeqAIJ(): Matrix size: 3200 X 640; storage space: 53776 unneeded,3824 used
[1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 6
[1] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 2560)/(num_localrows 3200) > 0.6. Use CompressedRow routines.
[0] VecScatterCreateCommon_PtoS(): Using blocksize 1 scatter
[0] VecScatterCreate(): General case: MPI to Seq
[0] MatSetOption_Inode(): Not using Inode routines due to MatSetOption(MAT_DO_NOT_USE_INODES
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 3200 X 320; storage space: 55688 unneeded,1912 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 6
[0] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 2880)/(num_localrows 3200) > 0.6. Use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 6
[0] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 2880)/(num_localrows 3200) > 0.6. Use CompressedRow routines.
[3] VecScatterCreateCommon_PtoS(): Using blocksize 1 scatter
[3] MatSetOption_Inode(): Not using Inode routines due to MatSetOption(MAT_DO_NOT_USE_INODES
[3] MatAssemblyEnd_SeqAIJ(): Matrix size: 3200 X 320; storage space: 55688 unneeded,1912 used
[3] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[3] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 6
[3] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 2880)/(num_localrows 3200) > 0.6. Use CompressedRow routines.
[2] VecScatterCreateCommon_PtoS(): Using blocksize 1 scatter
[2] MatSetOption_Inode(): Not using Inode routines due to MatSetOption(MAT_DO_NOT_USE_INODES
[2] MatAssemblyEnd_SeqAIJ(): Matrix size: 3200 X 640; storage space: 53776 unneeded,3824 used
[2] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[2] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 6
[2] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 2560)/(num_localrows 3200) > 0.6. Use CompressedRow routines.
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[2] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[2] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[1] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[1] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[3] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[3] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[1] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[1] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[3] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[3] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[2] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[2] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[0] PCSetUp(): Setting up new PC
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PCSetUp(): Setting up new PC
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PCSetUp(): Setting up new PC
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PCSetUp(): Setting up new PC
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PCSetUp(): Setting up new PC
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] KSPDefaultConverged(): user has provided nonzero initial guess, computing 2-norm of preconditioned RHS
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 1.00217e-05 is less than relative tolerance 1e-05 times initial right hand side norm 6.98447 at iteration 5
[0] MatStashScatterBegin_Private(): No of messages: 0
[0] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 1600 X 1600; storage space: 774 unneeded,13626 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9
[0] Mat_CheckInode(): Found 1600 nodes out of 1600 rows. Not using Inode routines
[1] MatStashScatterBegin_Private(): No of messages: 0
[1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
[1] MatAssemblyEnd_SeqAIJ(): Matrix size: 1600 X 1600; storage space: 1016 unneeded,13384 used
[1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9
[1] Mat_CheckInode(): Found 1600 nodes out of 1600 rows. Not using Inode routines
[2] MatStashScatterBegin_Private(): No of messages: 0
[2] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
[2] MatAssemblyEnd_SeqAIJ(): Matrix size: 1600 X 1600; storage space: 1016 unneeded,13384 used
[2] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[2] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9
[2] Mat_CheckInode(): Found 1600 nodes out of 1600 rows. Not using Inode routines
[3] MatStashScatterBegin_Private(): No of messages: 0
[3] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
[3] MatAssemblyEnd_SeqAIJ(): Matrix size: 1600 X 1600; storage space: 1016 unneeded,13384 used
[3] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[3] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9
[3] Mat_CheckInode(): Found 1600 nodes out of 1600 rows. Not using Inode routines
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] MatSetUpMultiply_MPIAIJ(): Using block index set to define scatter
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] VecScatterCreateCommon_PtoS(): Using blocksize 1 scatter
[2] VecScatterCreateCommon_PtoS(): Using blocksize 1 scatter
[2] MatSetOption_Inode(): Not using Inode routines due to MatSetOption(MAT_DO_NOT_USE_INODES
[2] MatAssemblyEnd_SeqAIJ(): Matrix size: 1600 X 320; storage space: 13444 unneeded,956 used
[2] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] VecScatterCreate(): General case: MPI to Seq
[2] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3
[2] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 1280)/(num_localrows 1600) > 0.6. Use CompressedRow routines.
[0] MatSetOption_Inode(): Not using Inode routines due to MatSetOption(MAT_DO_NOT_USE_INODES
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 1600 X 160; storage space: 13922 unneeded,478 used
[0] MatSetOption_Inode(): Not using Inode routines due to MatSetOption(MAT_DO_NOT_USE_INODES
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 1600 X 160; storage space: 13922 unneeded,478 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3
[0] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 1440)/(num_localrows 1600) > 0.6. Use CompressedRow routines.
[3] VecScatterCreateCommon_PtoS(): Using blocksize 1 scatter
[3] MatSetOption_Inode(): Not using Inode routines due to MatSetOption(MAT_DO_NOT_USE_INODES
[3] MatAssemblyEnd_SeqAIJ(): Matrix size: 1600 X 160; storage space: 13922 unneeded,478 used
[3] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[3] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3
[3] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 1440)/(num_localrows 1600) > 0.6. Use CompressedRow routines.
[1] VecScatterCreateCommon_PtoS(): Using blocksize 1 scatter
[1] MatSetOption_Inode(): Not using Inode routines due to MatSetOption(MAT_DO_NOT_USE_INODES
[1] MatAssemblyEnd_SeqAIJ(): Matrix size: 1600 X 320; storage space: 13444 unneeded,956 used
[1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3
[1] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 1280)/(num_localrows 1600) > 0.6. Use CompressedRow routines.
[1] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[3] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[1] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[3] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[2] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[2] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[2] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[2] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[1] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[3] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[3] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[1] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PCSetUp(): Setting up new PC
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[3] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PCSetUp(): Setting up new PC
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[1] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PCSetUp(): Setting up new PC
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[2] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PCSetUp(): Setting up new PC
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PCSetUp(): Setting up new PC
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PCSetUp(): Setting up new PC
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] PetscCommDuplicate(): Using internal PETSc communicator 92 143
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 8.84097e-05 is less than relative tolerance 1e-05 times initial right hand side norm 8.96753 at iteration 212
           1  1.000000000000000E-002   1.15678640520876
  0.375502846664950


I saw some statements containing "Seq" (e.g. MatAssemblyEnd_SeqAIJ and
"General case: MPI to Seq"). Am I running in sequential or parallel mode?
Also, have I preallocated too much space?
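
In case it matters, this is roughly how I create and preallocate the momentum
matrix (a sketch only; the preallocation of 18 nonzeros per row and the names
are just illustrative, not necessarily what my real code uses):

  ! 4 processes, 3200 local rows/columns each, global size 12800
  ! preallocate 18 nonzeros per row in the diagonal and off-diagonal blocks
  call MatCreateMPIAIJ(PETSC_COMM_WORLD,3200,3200,12800,12800, &
                       18,PETSC_NULL_INTEGER,18,PETSC_NULL_INTEGER,A_mom,ierr)

I see -info lines like "storage space: 4064 unneeded,53536 used", but I'm not
sure whether that counts as over-preallocating.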

Lastly, if Ax=b, then A_sta and A_end from MatGetOwnershipRange
<file:///D:/PhD/CODES/petsc-2.3.2-p6/docs/manualpages/Mat/MatGetOwnershipRange.html#MatGetOwnershipRange>
and b_sta and b_end from VecGetOwnershipRange
<file:///D:/PhD/CODES/petsc-2.3.2-p6/docs/manualpages/Vec/VecGetOwnershipRange.html#VecGetOwnershipRange>
should always be the same values, right?
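
In other words, I'm doing something like this (sketch only, placeholder names)
and expecting the two ranges to agree on every process:

  call MatGetOwnershipRange(A_mom,A_sta,A_end,ierr)
  call VecGetOwnershipRange(b_mom,b_sta,b_end,ierr)
  ! since A_mom is square and b_mom uses the default layout, I expect
  ! A_sta == b_sta and A_end == b_end on each process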

Thank you.