[0] PetscInitialize(): PETSc successfully started: number of processors = 1
[0] PetscInitialize(): Running on machine: localhost.localdomain
-------------------------------------------------
 Joab
 a 3D fluid-structure interaction solver
 by Yuqi Wu
 compiled on Apr 3 2012
 revision $Rev: 2 $
-------------------------------------------------
Command-line options:
  -Lambda 1.73086e6 -Nplot 1 -Nsteps 1 -Pin -coarse_ksp_gmres_restart 100 -coarse_ksp_max_it 1000
  -coarse_ksp_monitor -coarse_ksp_pc_side right -coarse_ksp_rtol 1.e-3 -coarse_ksp_type gmres
  -coarse_pc_type asm -coarse_sub_pc_type lu -coarsegrid /home/stuwyq/Grid3D/3Dfsi/3Dfsi.fsi
  -f /home/stuwyq/Grid3D/3Dfsi/3DfsiA.fsi -final_time 0.0001 -fine_sub_pc_type lu -geometric_asm
  -geometric_asm_overlap 1 -info -initial_time 0.0 -ksp_atol 1.e-14 -ksp_gmres_restart 100
  -ksp_max_it 1000 -ksp_monitor -ksp_pc_side right -ksp_rtol 1.e-6 -ksp_type fgmres -laplace
  -log_summary -mat_partitioning_type parmetis -mg -mu 1.1538e6 -nest_ksp_pc_side right
  -nest_ksp_rtol 1.e-6 -nest_sub_pc_type lu -nobile -output 3DfsiA -scale_mu -snes_max_it 10
  -snes_rtol 1.e-7 -snes_view -solid_density 1.2 -solution_output -timestep 1 -two_level -viscosity 0.03
-------------------------------------------------
1. Loading the options and grid information.
vcycle=0, cascade=0, vdown=0, mg=1
inlet velocity = 7.500000
----------------Parameters Specification----------------
Time-dependent FSI problem
Using Backward Euler timestep scheme with dt = 0.000100, initial time = 0.000000, final time = 0.000100
Fluid Parameters: viscosity = 0.030000, fluid density = 1.000000
Solid Parameters: Lambda = 1.730860e+06, mu = 1.153800e+06, solid density = 1.200000
Boundary Flags: vout_flag = 0, vin_flag = 0, pin_flag = 1, dxdy_flag = 1, wall_flag = 0
----------------Mesh Specification----------------
FEM mesh /home/stuwyq/Grid3D/3Dfsi/3DfsiA.fsi has total elem = 5886, total vert = 11585, total fsi elem = 776
Fluid inlet sideset with 24 faces
Fluid outlet sideset with 24 faces
Solid outside sideset with 56 faces
1 processors has outlet nodes
----------------Mesh Specification----------------
FEM mesh /home/stuwyq/Grid3D/3Dfsi/3Dfsi.fsi has total elem = 1833, total vert = 4186, total fsi elem = 308
Fluid inlet sideset with 8 faces
Fluid outlet sideset with 8 faces
Solid outside sideset with 36 faces
1 processors has outlet nodes
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 4186; storage space: 47995 unneeded,125780 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 12
[0] Mat_CheckInode(): Found 4931 nodes of 11585. Limit used: 5. Using Inode routines
----------------Mesh Partition----------------
[0] has 11585 degrees of freedom (matrix), 11585 degrees of freedom (including shared points).
----------------ASM has 1 layers of overlaps----------------
[0] has 11585 degrees of freedom (including overlaps).
----------------Coarse Mesh Partition----------------
[0] has 4186 degrees of freedom (matrix), 4186 degrees of freedom (including shared points).
----------------ASM has 1 layers of overlaps----------------
[0] has 4186 degrees of freedom (including overlaps).
2. Setup the Parallel vector and matrix.
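The mesh-partition and overlap report above is driven by the -mat_partitioning_type parmetis and -geometric_asm_overlap 1 options. A minimal sketch of how a ParMETIS partition is typically obtained through the PETSc API is given below; it is not taken from the Joab source, and 'adj' (an adjacency Mat) and the function name are placeholders.

    /* Sketch only: partition a connectivity graph with ParMETIS, as selected
     * here by -mat_partitioning_type parmetis.  'adj' is a placeholder Mat. */
    #include <petscmat.h>

    PetscErrorCode PartitionGraph(Mat adj, IS *newOwner)
    {
      MatPartitioning part;
      PetscErrorCode  ierr;

      PetscFunctionBegin;
      ierr = MatPartitioningCreate(PETSC_COMM_WORLD,&part);CHKERRQ(ierr);
      ierr = MatPartitioningSetAdjacency(part,adj);CHKERRQ(ierr);
      ierr = MatPartitioningSetType(part,MATPARTITIONINGPARMETIS);CHKERRQ(ierr);
      ierr = MatPartitioningSetFromOptions(part);CHKERRQ(ierr); /* honours -mat_partitioning_type */
      ierr = MatPartitioningApply(part,newOwner);CHKERRQ(ierr); /* IS: new owner rank of each vertex */
      ierr = MatPartitioningDestroy(&part);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }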
3. Solving the nonlinear problem.
Output Solution Basic file
Outputting to grid basic file 3DfsiA_basic.mvtk...
output elapsed 26.618530 sec
Current time = 0.000000
Form the initial condition from given functions:
LOADING the zero solution as initial condition
Current time = 0.000100
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 1685660 unneeded,167940 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 72
[0] Mat_CheckInode(): Found 5968 nodes of 11585. Limit used: 5. Using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 2515604 unneeded,380646 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 128
[0] Mat_CheckInode(): Found 8131 nodes of 11585. Limit used: 5. Using Inode routines
0 SNES norm 1.014991e+02, 0 KSP its (nan coarse its average), last norm 0.000000e+00.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 1383785 unneeded,469935 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 8
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 168
[0] Mat_CheckInode(): Found 8131 nodes of 11585. Limit used: 5. Using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 1596 unneeded,468339 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 168
[0] Mat_CheckInode(): Found 8159 nodes of 11585. Limit used: 5. Using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 10242 unneeded,458097 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 168
[0] Mat_CheckInode(): Found 8271 nodes of 11585. Limit used: 5. Using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 527396 unneeded,142364 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 140
[0] Mat_CheckInode(): Found 3164 nodes of 4186. Limit used: 5. Using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 261 unneeded,142103 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 140
[0] Mat_CheckInode(): Found 3168 nodes of 4186. Limit used: 5. Using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 6624 unneeded,135479 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 140
[0] Mat_CheckInode(): Found 3240 nodes of 4186. Limit used: 5. Using Inode routines
[0] PCSetUp(): Setting up new PC
[0] PCSetUp_MG(): Using outer operators to define finest grid operator because PCMGGetSmoother(pc,nlevels-1,&ksp);KSPSetOperators(ksp,...); was not called.
[0] MatGetSymbolicTranspose_SeqAIJ(): Getting Symbolic Transpose.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 0 unneeded,656174 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 410
[0] Mat_CheckInode(): Found 1794 nodes of 4186. Limit used: 5. Using Inode routines
[0] MatRestoreSymbolicTranspose_SeqAIJ(): Restoring Symbolic Transpose.
[0] MatPtAPSymbolic_SeqAIJ_SeqAIJ(): Reallocs 1; Fill ratio: given 1 needed 1.43239.
[0] MatPtAPSymbolic_SeqAIJ_SeqAIJ(): Use MatPtAP(A,P,MatReuse,1.43239,&C) for best performance.
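The MatPtAPSymbolic hint just above refers to the Galerkin coarse operator C = P^T A P that PCMG builds from the interpolation. In this run PCMG forms the product internally, so the following is illustration only: a hypothetical helper showing how the reported fill of 1.43239 would be passed if the product were formed explicitly; A (fine operator), P (interpolation), and the function name are assumptions, not from the Joab source.

    #include <petscmat.h>

    /* Hypothetical helper: form/refresh the Galerkin coarse operator C = P^T A P,
     * supplying the fill 1.43239 reported by -info so the symbolic phase does
     * not have to reallocate. */
    PetscErrorCode FormCoarseOperator(Mat A, Mat P, Mat *C, PetscBool firstTime)
    {
      PetscErrorCode ierr;
      PetscFunctionBegin;
      if (firstTime) {
        ierr = MatPtAP(A,P,MAT_INITIAL_MATRIX,1.43239,C);CHKERRQ(ierr); /* allocate with the reported fill */
      } else {
        ierr = MatPtAP(A,P,MAT_REUSE_MATRIX,1.43239,C);CHKERRQ(ierr);   /* reuse the symbolic product      */
      }
      PetscFunctionReturn(0);
    }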
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 0 unneeded,656174 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 410
[0] PCSetUp(): Setting up new PC
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 0 unneeded,458097 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 168
[0] Mat_CheckInode(): Found 8271 nodes of 11585. Limit used: 5. Using Inode routines
[0] PCSetUp(): Setting up new PC
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 0 unneeded,656174 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 410
[0] Mat_CheckInode(): Found 1794 nodes of 4186. Limit used: 5. Using Inode routines
0 KSP Residual norm 1.014990964599e+02
[0] PCSetUp(): Setting up new PC
[0] MatLUFactorSymbolic_SeqAIJ(): Reallocs 2 Fill ratio:given 5 needed 11.401
[0] MatLUFactorSymbolic_SeqAIJ(): Run with -pc_factor_fill 11.401 or use
[0] MatLUFactorSymbolic_SeqAIJ(): PCFactorSetFill(pc,11.401);
[0] MatLUFactorSymbolic_SeqAIJ(): for best performance.
[0] Mat_CheckInode_FactorLU(): Found 8057 nodes of 11585. Limit used: 5. Using Inode routines
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 5.755920981112e-13 is less than relative tolerance 1.000000000000e-05 times initial right hand side norm 5.479558824115e+02 at iteration 1
[0] PCSetUp(): Setting up new PC
[0] MatLUFactorSymbolic_SeqAIJ(): Reallocs 1 Fill ratio:given 5 needed 7.07175
[0] MatLUFactorSymbolic_SeqAIJ(): Run with -pc_factor_fill 7.07175 or use
[0] MatLUFactorSymbolic_SeqAIJ(): PCFactorSetFill(pc,7.07175);
[0] MatLUFactorSymbolic_SeqAIJ(): for best performance.
[0] Mat_CheckInode_FactorLU(): Found 1764 nodes of 4186. Limit used: 5. Using Inode routines
Residual norms for coarse_ solve.
0 KSP Residual norm 5.698312810532e-16
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 3.601438196048e-30 is less than relative tolerance 1.000000000000e-03 times initial right hand side norm 5.698312810532e-16 at iteration 1
1 KSP Residual norm 3.601438196048e-30
[0] KSPDefaultConverged(): user has provided nonzero initial guess, computing 2-norm of preconditioned RHS
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 1.036978835722e-12 is less than relative tolerance 1.000000000000e-05 times initial right hand side norm 5.479558824115e+02 at iteration 0
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 2.961124920270e-13 is less than relative tolerance 1.000000000000e-06 times initial right hand side norm 1.014990964599e+02 at iteration 1
1 KSP Residual norm 2.961124920270e-13
[0] SNESSolve_LS(): iter=0, linear solve iterations=1
[0] SNESLSCheckResidual_Private(): ||J^T(F-Ax)||/||F-AX|| 8.414999156797e+00 near zero implies inconsistent rhs
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 0 unneeded,167940 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 72
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 0 unneeded,380646 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 128
[0] SNESLineSearchCubic(): Initial fnorm 1.014990964599e+02 gnorm 9.922628060272e-05
[0] SNESSolve_LS(): fnorm=1.0149909645993024e+02, gnorm=9.9226280602722226e-05, ynorm=5.5617026964667733e+04, lssucceed=1
1 SNES norm 9.922628e-05, 1 KSP its (1.00 coarse its average), last norm 2.961125e-13.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 2787 unneeded,469935 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 975
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 168
[0] Mat_CheckInode(): Found 8131 nodes of 11585. Limit used: 5. Using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 1596 unneeded,468339 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 168
[0] Mat_CheckInode(): Found 8159 nodes of 11585. Limit used: 5. Using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 10242 unneeded,458097 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 168
[0] Mat_CheckInode(): Found 8271 nodes of 11585. Limit used: 5. Using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 1260 unneeded,142364 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 543
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 140
[0] Mat_CheckInode(): Found 3164 nodes of 4186. Limit used: 5. Using Inode routines
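The MatLUFactorSymbolic messages above report that both symbolic LU factorizations needed more fill than the default estimate of 5 (11.401 for the 11585-row subdomain factor, 7.07175 for the 4186-row coarse factor), forcing reallocations. Because these LU solves sit inside ASM blocks whose option prefixes are fine_sub_ and coarse_sub_ (see the -snes_view output further down), the generic suggestion would presumably be given with those prefixes; both the command-line and the API routes are sketched here, and 'subpc' is a hypothetical handle (obtainable, for example, via PCASMGetSubKSP and KSPGetPC).

    -fine_sub_pc_factor_fill 11.401 -coarse_sub_pc_factor_fill 7.07175

    /* or, in code, for one subdomain preconditioner: */
    ierr = PCFactorSetFill(subpc,11.401);CHKERRQ(ierr);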
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 261 unneeded,142103 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 140
[0] Mat_CheckInode(): Found 3168 nodes of 4186. Limit used: 5. Using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 6624 unneeded,135479 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 140
[0] Mat_CheckInode(): Found 3240 nodes of 4186. Limit used: 5. Using Inode routines
[0] PCSetUp(): Setting up PC with different nonzero pattern
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 0 unneeded,656174 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 410
[0] PCSetUp(): Setting up PC with different nonzero pattern
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 4186 X 4186; storage space: 0 unneeded,656174 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 410
[0] Mat_CheckInode(): Found 1794 nodes of 4186. Limit used: 5. Using Inode routines
0 KSP Residual norm 9.922628060272e-05
[0] PCSetUp(): Setting up PC with different nonzero pattern
[0] MatLUFactorSymbolic_SeqAIJ(): Reallocs 1 Fill ratio:given 5 needed 7.07175
[0] MatLUFactorSymbolic_SeqAIJ(): Run with -pc_factor_fill 7.07175 or use
[0] MatLUFactorSymbolic_SeqAIJ(): PCFactorSetFill(pc,7.07175);
[0] MatLUFactorSymbolic_SeqAIJ(): for best performance.
[0] Mat_CheckInode_FactorLU(): Found 1764 nodes of 4186. Limit used: 5. Using Inode routines
Residual norms for coarse_ solve.
0 KSP Residual norm 2.792963779592e-03
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 5.879790825362e-16 is less than relative tolerance 1.000000000000e-03 times initial right hand side norm 2.792963779592e-03 at iteration 1
1 KSP Residual norm 5.879790825362e-16
[0] KSPDefaultConverged(): user has provided nonzero initial guess, computing 2-norm of preconditioned RHS
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 5.498381257694e-01 is less than relative tolerance 1.000000000000e-05 times initial right hand side norm 2.574184291251e+05 at iteration 1
1 KSP Residual norm 2.458835339854e-10
Residual norms for coarse_ solve.
0 KSP Residual norm 2.998256760232e-03
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 3.661186817260e-16 is less than relative tolerance 1.000000000000e-03 times initial right hand side norm 2.998256760232e-03 at iteration 1
1 KSP Residual norm 3.661186817260e-16
[0] KSPDefaultConverged(): user has provided nonzero initial guess, computing 2-norm of preconditioned RHS
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 8.906356071205e-01 is less than relative tolerance 1.000000000000e-05 times initial right hand side norm 1.270205793273e+05 at iteration 1
[0] KSPDefaultConverged(): Linear solver has converged. Residual norm 8.827915897294e-16 is less than absolute tolerance 1.000000000000e-14 at iteration 2
2 KSP Residual norm 8.827915897294e-16
[0] SNESSolve_LS(): iter=1, linear solve iterations=2
[0] SNESLSCheckResidual_Private(): ||J^T(F-Ax)||/||F-AX|| 1.745471524738e-01 near zero implies inconsistent rhs
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 0 unneeded,167940 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 72
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11585 X 11585; storage space: 0 unneeded,380646 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 128
[0] SNESLineSearchCubic(): Initial fnorm 9.922628060272e-05 gnorm 1.397280661555e-09
[0] SNESSolve_LS(): fnorm=9.9226280602722226e-05, gnorm=1.3972806615552385e-09, ynorm=2.5538780419569811e+01, lssucceed=1
2 SNES norm 1.397281e-09, 2 KSP its (1.00 coarse its average), last norm 8.827916e-16.
[0] SNESDefaultConverged(): Converged due to function norm 1.397280661555e-09 < 1.014990964599e-05 (relative tolerance)
SNES Object: 1 MPI processes
  type: ls
    line search variant: SNESLineSearchCubic
    alpha=1.000000000000e-04, maxstep=1.000000000000e+08, minlambda=1.000000000000e-12
  maximum iterations=10, maximum function evaluations=10000
  tolerances: relative=1e-07, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=3
  total number of function evaluations=3
  KSP Object: 1 MPI processes
    type: fgmres
      GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=1000, initial guess is zero
    tolerances: relative=1e-06, absolute=1e-14, divergence=10000
    right preconditioning
    using UNPRECONDITIONED norm type for convergence test
  PC Object: 1 MPI processes
    type: mg
      MG: type is MULTIPLICATIVE, levels=2 cycles=v
        Cycles per PCApply=1
        Using Galerkin computed coarse grid matrices
    Coarse grid solver -- level -------------------------------
      KSP Object: (coarse_) 1 MPI processes
        type: gmres
          GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
          GMRES: happy breakdown tolerance 1e-30
        maximum iterations=1000, initial guess is zero
        tolerances: relative=0.001, absolute=1e-50, divergence=10000
        right preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object: (coarse_) 1 MPI processes
        type: asm
          Additive Schwarz: total subdomain blocks = 1, user-defined overlap
          Additive Schwarz: restriction/interpolation type - RESTRICT
          Local solve is same for all blocks, in the following KSP and PC objects:
        KSP Object: (coarse_sub_) 1 MPI processes
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (coarse_sub_) 1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 1e-12
            matrix ordering: nd
            factor fill ratio given 5, needed 7.07175
              Factored matrix follows:
                Matrix Object: 1 MPI processes
                  type: seqaij
                  rows=4186, cols=4186
                  package used to perform factorization: petsc
                  total: nonzeros=4640301, allocated nonzeros=4640301
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 1764 nodes, limit used is 5
          linear system matrix = precond matrix:
          Matrix Object: 1 MPI processes
            type: seqaij
            rows=4186, cols=4186
            total: nonzeros=656174, allocated nonzeros=656174
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 1794 nodes, limit used is 5
        linear system matrix = precond matrix:
        Matrix Object: 1 MPI processes
          type: seqaij
          rows=4186, cols=4186
          total: nonzeros=656174, allocated nonzeros=0
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 1794 nodes, limit used is 5
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (mg_levels_1_) 1 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=1
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using PRECONDITIONED norm type for convergence test
      PC Object: (fine_) 1 MPI processes
        type: asm
          Additive Schwarz: total subdomain blocks = 1, user-defined overlap
          Additive Schwarz: restriction/interpolation type - RESTRICT
          Local solve is same for all blocks, in the following KSP and PC objects:
        KSP Object: (fine_sub_) 1 MPI processes
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (fine_sub_) 1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 1e-12
            matrix ordering: nd
            factor fill ratio given 5, needed 11.401
              Factored matrix follows:
                Matrix Object: 1 MPI processes
                  type: seqaij
                  rows=11585, cols=11585
                  package used to perform factorization: petsc
                  total: nonzeros=5222778, allocated nonzeros=5222778
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 8057 nodes, limit used is 5
          linear system matrix = precond matrix:
          Matrix Object: 1 MPI processes
            type: seqaij
            rows=11585, cols=11585
            total: nonzeros=458097, allocated nonzeros=458097
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 8271 nodes, limit used is 5
        linear system matrix = precond matrix:
        Matrix Object: 1 MPI processes
          type: seqaij
          rows=11585, cols=11585
          total: nonzeros=458097, allocated nonzeros=1868345
          total number of mallocs used during MatSetValues calls =983
            using I-node routines: found 8271 nodes, limit used is 5
    Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object: 1 MPI processes
    type: seqaij
    rows=11585, cols=11585
    total: nonzeros=458097, allocated nonzeros=1868345
    total number of mallocs used during MatSetValues calls =983
      using I-node routines: found 8271 nodes, limit used is 5
SNES converged: CONVERGED_FNORM_RELATIVE.
TS time of 75.694302 sec makes total time 0.000000 sec.
inflow=2.460036e+00 outflow=5.191675e-10, pin=1.168742e+04, pout=-6.477746e-09
Outputting to matlab file 3DfsiA1.mvtk...
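The -snes_view output above shows how the nested solvers pick up the prefixed options from the command line (outer -ksp_*, coarse-level -coarse_*, level-1 smoother -fine_*, subdomain factors -coarse_sub_*/-fine_sub_*). The following is a speculative sketch, not taken from the Joab source, of how such prefixes can be attached with the PETSc API; 'snes' is assumed to be the already-created SNES with a PCMG preconditioner.

    #include <petscsnes.h>

    /* Speculative sketch: wire option prefixes onto the MG hierarchy so that
     * the coarse solver reads -coarse_* and the level-1 ASM PC reads -fine_*. */
    PetscErrorCode WireSolverPrefixes(SNES snes)
    {
      KSP            ksp,coarse,smooth;
      PC             pc,spc;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = SNESGetKSP(snes,&ksp);CHKERRQ(ierr);
      ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);                      /* PC of type mg                */
      ierr = PCMGGetCoarseSolve(pc,&coarse);CHKERRQ(ierr);         /* level 0 solver               */
      ierr = KSPSetOptionsPrefix(coarse,"coarse_");CHKERRQ(ierr);  /* -coarse_ksp_*, -coarse_pc_*  */
      ierr = KSPSetFromOptions(coarse);CHKERRQ(ierr);
      ierr = PCMGGetSmoother(pc,1,&smooth);CHKERRQ(ierr);          /* level 1 Richardson smoother  */
      ierr = KSPGetPC(smooth,&spc);CHKERRQ(ierr);
      ierr = PCSetOptionsPrefix(spc,"fine_");CHKERRQ(ierr);        /* -fine_sub_pc_type lu, etc.   */
      ierr = PCSetFromOptions(spc);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }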
output elapsed 0.135469 sec makes total 0.135469 sec
Total number of timesteps = 1
Total number of nonlinear iterations = 0
Average nonlinear iterations per timestep = 0.000000
Total number of linear iterations = 0
Average number of linear iterations = nan
Average walltime per timestep (excluding output) = nan
Latex line (df): 11585& 1& 0.000& nan& nan \\
[0] PetscFinalize(): PetscFinalize() called
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./fsi3d on a linux-gnu named localhost.localdomain with 1 processor, by stuwyq Tue Apr 3 16:29:35 2012
Using Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45 CST 2012

                         Max       Max/Min        Avg      Total
Time (sec):           1.059e+02      1.00000   1.059e+02
Objects:              2.990e+02      1.00000   2.990e+02
Flops:                9.437e+09      1.00000   9.437e+09  9.437e+09
Flops/sec:            8.909e+07      1.00000   8.909e+07  8.909e+07
Memory:               2.663e+08      1.00000              2.663e+08
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       6.830e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flops
                          and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:          Main Stage: 1.0318e+02  97.4%  9.4372e+09 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  6.780e+02  99.3%
 1: Interpolation Stage: 2.7470e+00   2.6%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  4.000e+00   0.6%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase          %f - percent flops in this phase
      %M - percent messages in this phase      %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------

      ##########################################################
      #                                                        #
      #                       WARNING!!!                       #
      #                                                        #
      #   This code was compiled with a debugging option,      #
      #   To get timing results run ./configure                #
      #   using --with-debugging=no, the performance will      #
      #   be generally two or three times faster.              #
      #                                                        #
      ##########################################################

Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult               30 1.0 8.5098e-02 1.0 2.44e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   287
MatMultAdd             3 1.0 2.3520e-03 1.0 7.55e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   321
MatMultTranspose       8 1.0 1.7276e-02 1.0 3.34e+06 1.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1   193
MatSolve              20 1.0 5.5847e-01 1.0 2.02e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  2  0  0  0   1  2  0  0  0   361
MatLUFactorSym         3 1.0 1.4073e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.5e+01  1  0  0  0  2   1  0  0  0  2     0
MatLUFactorNum         3 1.0 3.2754e+01 1.0 9.16e+09 1.0 0.0e+00 0.0e+00 0.0e+00 31 97  0  0  0  32 97  0  0  0   280
MatAssemblyBegin      17 1.0 4.8164e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd        17 1.0 7.8451e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            3 1.0 1.4249e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       3 1.0 7.2378e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  2   0  0  0  0  2     0
MatGetOrdering         3 1.0 6.5199e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  2   0  0  0  0  2     0
MatPartitioning        1 1.0 3.6610e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries        10 1.0 6.7750e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView                7 1.0 6.7371e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatPtAP                2 1.0 1.0625e+00 1.0 4.94e+07 1.0 0.0e+00 0.0e+00 5.0e+00  1  1  0  0  1   1  1  0  0  1    47
MatPtAPSymbolic        1 1.0 3.1505e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatPtAPNumeric         2 1.0 7.4744e-01 1.0 4.94e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0    66
MatGetSymTrans         1 1.0 2.8011e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecView                2 1.0 3.7433e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecDot                 2 1.0 1.1689e-04 1.0 4.63e+04 1.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0   396
VecMDot                6 1.0 2.6295e-04 1.0 1.18e+05 1.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0   448
VecNorm               33 1.0 3.9821e-03 1.0 6.76e+05 1.0 0.0e+00 0.0e+00 1.6e+01  0  0  0  0  2   0  0  0  0  2   170
VecScale              11 1.0 3.3825e-04 1.0 8.30e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   246
VecCopy               22 1.0 7.9994e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                71 1.0 1.4726e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               21 1.0 1.5402e-03 1.0 4.42e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   287
VecAYPX               11 1.0 8.7677e-04 1.0 1.27e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   145
VecWAXPY               2 1.0 1.6994e-04 1.0 2.32e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   136
VecMAXPY              11 1.0 6.7021e-04 1.0 2.12e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   317
VecAssemblyBegin       8 1.0 3.8900e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+01  0  0  0  0  4   0  0  0  0  4     0
VecAssemblyEnd         8 1.0 4.2935e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin       92 1.0 1.3602e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecReduceArith         2 1.0 1.8400e-04 1.0 4.63e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   252
VecReduceComm          1 1.0 1.8836e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize           6 1.0 4.5392e-04 1.0 7.53e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   166
SNESSolve              1 1.0 7.5674e+01 1.0 9.44e+09 1.0 0.0e+00 0.0e+00 3.4e+02 71 100  0  0 49  73 100  0  0 50   125
SNESLineSearch         2 1.0 4.7974e+00 1.0 4.95e+06 1.0 0.0e+00 0.0e+00 3.0e+01  5  0  0  0  4   5  0  0  0  4     1
SNESFunctionEval       3 1.0 7.2572e+00 1.0 4.43e+06 1.0 0.0e+00 0.0e+00 3.0e+01  7  0  0  0  4   7  0  0  0  4     1
SNESJacobianEval       2 1.0 3.2317e+01 1.0 5.11e+05 1.0 0.0e+00 0.0e+00 2.0e+00 31  0  0  0  0  31  0  0  0  0     0
KSPGMRESOrthog         6 1.0 7.5141e-04 1.0 2.36e+05 1.0 0.0e+00 0.0e+00 1.0e+01  0  0  0  0  1   0  0  0  0  1   314
KSPSetup               8 1.0 2.1188e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+01  0  0  0  0  3   0  0  0  0  3     0
KSPSolve               2 1.0 3.6070e+01 1.0 9.43e+09 1.0 0.0e+00 0.0e+00 2.8e+02 34 100  0  0 42  35 100  0  0 42   261
PCSetUp                5 1.0 3.5411e+01 1.0 9.21e+09 1.0 0.0e+00 0.0e+00 1.0e+02 33 98  0  0 15  34 98  0  0 15   260
PCSetUpOnBlocks        9 1.0 3.4264e+01 1.0 9.16e+09 1.0 0.0e+00 0.0e+00 2.7e+01 32 97  0  0  4  33 97  0  0  4   267
PCApply                3 1.0 3.4907e+01 1.0 9.37e+09 1.0 0.0e+00 0.0e+00 1.9e+02 33 99  0  0 28  34 99  0  0 28   269

--- Event Stage 1: Interpolation Stage

MatAssemblyBegin       1 1.0 2.0429e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         1 1.0 3.7680e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage

              Matrix        14             15    306511760     0
 Matrix Partitioning         1              1          368     0
           Index Set       125            125      1996040     0
   IS L to G Mapping         2              2       126824     0
              Vector       106            106     16827416     0
      Vector Scatter        32             32        11648     0
   Application Order         4              4       251152     0
              Viewer         3              2          856     0
                SNES         1              1          792     0
       Krylov Solver         5              5       339764     0
      Preconditioner         5              5         3088     0

--- Event Stage 1: Interpolation Stage

              Matrix         1              0            0     0
========================================================================================================================
Average time to get PetscTime(): 1.00136e-06
#PETSc Option Table entries:
  -Lambda 1.73086e6 -Nplot 1 -Nsteps 1 -Pin -coarse_ksp_gmres_restart 100 -coarse_ksp_max_it 1000
  -coarse_ksp_monitor -coarse_ksp_pc_side right -coarse_ksp_rtol 1.e-3 -coarse_ksp_type gmres
  -coarse_pc_type asm -coarse_sub_pc_type lu -coarsegrid /home/stuwyq/Grid3D/3Dfsi/3Dfsi.fsi
  -f /home/stuwyq/Grid3D/3Dfsi/3DfsiA.fsi -final_time 0.0001 -fine_sub_pc_type lu -geometric_asm
  -geometric_asm_overlap 1 -info -initial_time 0.0 -ksp_atol 1.e-14 -ksp_gmres_restart 100
  -ksp_max_it 1000 -ksp_monitor -ksp_pc_side right -ksp_rtol 1.e-6 -ksp_type fgmres -laplace
  -log_summary -mat_partitioning_type parmetis -mg -mu 1.1538e6 -nest_ksp_pc_side right
  -nest_ksp_rtol 1.e-6 -nest_sub_pc_type lu -nobile -output 3DfsiA -scale_mu -snes_max_it 10
  -snes_rtol 1.e-7 -snes_view -solid_density 1.2 -solution_output -timestep 1 -two_level -viscosity 0.03
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 4 sizeof(void*) 4 sizeof(PetscScalar) 8
Configure run at: Fri Jan 20 12:46:13 2012
Configure options: --with-mpi-dir=/home/stuwyq/ProgramFiles/mpich2 --download-f-blas-lapack=1 --download-parmetis=1 --download-superlu=1 --download-superlu_dist=1 --download-scalapack=1 --download-blacs=1
-----------------------------------------
Libraries compiled on Fri Jan 20 12:46:13 2012 on localhost.localdomain
Machine characteristics: Linux-2.6.30.10-105.2.23.fc11.i686.PAE-i686-with-fedora-11-Leonidas
Using PETSc directory: /home/stuwyq/ProgramFiles/petsc-3.2-p6
Using PETSc arch: linux-gnu-c-debug
-----------------------------------------
Using C compiler: /home/stuwyq/ProgramFiles/mpich2/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /home/stuwyq/ProgramFiles/mpich2/bin/mpif90 -Wall -Wno-unused-variable -g ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/stuwyq/ProgramFiles/petsc-3.2-p6/linux-gnu-c-debug/include -I/home/stuwyq/ProgramFiles/petsc-3.2-p6/include -I/home/stuwyq/ProgramFiles/petsc-3.2-p6/include -I/home/stuwyq/ProgramFiles/petsc-3.2-p6/linux-gnu-c-debug/include -I/home/stuwyq/ProgramFiles/mpich2/include
-----------------------------------------
Using C linker: /home/stuwyq/ProgramFiles/mpich2/bin/mpicc
Using Fortran linker: /home/stuwyq/ProgramFiles/mpich2/bin/mpif90
Using libraries: -Wl,-rpath,/home/stuwyq/ProgramFiles/petsc-3.2-p6/linux-gnu-c-debug/lib -L/home/stuwyq/ProgramFiles/petsc-3.2-p6/linux-gnu-c-debug/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/home/stuwyq/ProgramFiles/petsc-3.2-p6/linux-gnu-c-debug/lib -L/home/stuwyq/ProgramFiles/petsc-3.2-p6/linux-gnu-c-debug/lib -lsuperlu_dist_2.5 -lparmetis -lmetis -lscalapack -lblacs -lsuperlu_4.2 -lflapack -lfblas -lm -L/usr/lib/gcc/i586-redhat-linux/4.4.1 -ldl -lgcc_s -lgfortran -lm -lm -ldl -lgcc_s -ldl
-----------------------------------------
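The timings above come from the linux-gnu-c-debug build described in the configure summary just listed, and the warning box in the event log recommends rebuilding with --with-debugging=no before drawing performance conclusions. Following that recommendation, the same configure line with debugging disabled might look like the command below (the PETSC_ARCH name is an arbitrary choice, not from the log):

    ./configure --with-mpi-dir=/home/stuwyq/ProgramFiles/mpich2 --download-f-blas-lapack=1 \
        --download-parmetis=1 --download-superlu=1 --download-superlu_dist=1 \
        --download-scalapack=1 --download-blacs=1 --with-debugging=no PETSC_ARCH=linux-gnu-c-opt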
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
Option left: name:-nest_ksp_pc_side value: right
Option left: name:-nest_ksp_rtol value: 1.e-6
Option left: name:-nest_sub_pc_type value: lu
[0] Petsc_DelViewer(): Removing viewer data attribute in an MPI_Comm -2080374784
[0] Petsc_DelComm(): Removing reference to PETSc communicator imbedded in a user MPI_Comm m -2080374784
[0] Petsc_DelComm(): User MPI_Comm m 1140850688 is being freed, removing reference from inner PETSc comm to this outer comm
[0] PetscCommDestroy(): Deleting PETSc MPI_Comm -2080374784
[0] Petsc_DelCounter(): Deleting counter data in an MPI_Comm -2080374784
[0] Petsc_DelViewer(): Removing viewer data attribute in an MPI_Comm -2080374784
[0] Petsc_DelReduction(): Deleting reduction data in an MPI_Comm -2080374784