[petsc-users] GAMG issue
John Mousel
john.mousel at gmail.com
Thu Mar 29 10:18:50 CDT 2012
Mark,
I'm working with a new matrix, which again converges with ML and HYPRE on 4
cores. I pulled petsc-dev this morning, and I'm getting
[0]PCSetFromOptions_GAMG threshold set 1.000000e-02
[0]PCSetUp_GAMG level 0 N=556240, n data rows=1, n data cols=1, nnz/row
(ave)=27, np=4
[0]PCGAMGFilterGraph 65.8687% nnz after filtering, with threshold 0.01,
27.0672 nnz ave.
[0]maxIndSetAgg removed 0 of 556240 vertices. (0 local) 20114 selected.
[0]PCGAMGProlongator_AGG New grid 20114 nodes
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Error in external library!
[0]PETSC ERROR: Cannot disable floating point exceptions!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Development HG revision: 7d8a276dc2a168a3596c060afce69a229eb409ea HG Date: Thu Mar 29 07:30:18 2012 -0500
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ../JohnRepo/VFOLD_exe on a linux-deb named wv.iihr.uiowa.edu by jmousel Thu Mar 29 10:03:45 2012
[0]PETSC ERROR: Libraries linked from /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/linux-debug/lib
[0]PETSC ERROR: Configure run at Thu Mar 29 09:29:37 2012
[0]PETSC ERROR: Configure options --download-blacs=1 --download-hypre=1 --download-metis=1 --download-ml=1 --download-mpich=1 --download-parmetis=1 --download-scalapack=1 --with-blas-lapack-dir=/opt/intel11/mkl/lib/em64t --with-cc=gcc --with-cmake=/usr/local/bin/cmake --with-cxx=g++ --with-fc=ifort PETSC_ARCH=linux-debug
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: PetscSetFPTrap() line 465 in /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/sys/error/fp.c
[0]PETSC ERROR: PetscFPTrapPush() line 56 in /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/sys/error/fp.c
[0]PETSC ERROR: KSPComputeExtremeSingularValues_GMRES() line 42 in /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/ksp/impls/gmres/gmreig.c
[0]PETSC ERROR: KSPComputeExtremeSingularValues() line 47 in /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: PCGAMGOptprol_AGG() line 1293 in /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/agg.c
[0]PETSC ERROR: PCSetUp_GAMG() line 545 in /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/gamg.c
[0]PETSC ERROR: PCSetUp() line 832 in /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: KSPSetUp() line 266 in /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: KSPSolve() line 390 in /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/ksp/interface/itfunc.c
(All four ranks report the same error and stack trace; the interleaved output from ranks 1-3 is omitted here.)
I'm using the options:
-pc_type gamg -ksp_type bcgsl -pc_gamg_coarse_eq_limit 10
-pc_gamg_agg_nsmooths 1 -pc_gamg_sym_graph -mg_coarse_ksp_type richardson
-mg_coarse_pc_type sor -mg_coarse_pc_sor_its 8 -ksp_monitor_true_residual
-pc_gamg_verbose 2 -ksp_converged_reason -options_left -mg_levels_ksp_type
richardson -mg_levels_pc_type sor -pc_gamg_threshold 0.01
-pc_gamg_repartition
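
For reference, the solve is driven in the usual way (a minimal sketch, not
the actual VFOLD code; A, b, and x stand for the assembled system):

   KSP ksp;
   ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
   ierr = KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
   ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* picks up -pc_type gamg etc. */
   ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);       /* GAMG setup (and the trap above) happens in here */
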
John
On Tue, Mar 20, 2012 at 3:02 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:
>
> On Mar 20, 2012, at 3:39 PM, John Mousel wrote:
>
> Mark,
>
> I am using petsc-dev that I pulled after you made the changes for the
> non-symmetric discretization allocations last week.
> I think the difference in our results comes from different convergence
> tolerances. I'm using an rtol of 1.D-012. It seems to be converging very
> nicely now.
>
>
> Good,
>
> I think I dropped the option to set ksp and pc on levels after a bit, and
> that seems to have made the difference. GAMG should scale much better than
> HYPRE and ML, right?
> They both seem to work efficiently for really small core counts, but
> deteriorate with impressive speed as you go up the ladder.
>
>
> ML should scale pretty similarly to GAMG. The drop tolerance can affect
> complexity, but if these are about the same, the ML interface uses PETSc
> infrastructure.
>
> HYPRE is its own solver and it has been optimized for scalability. You
> do have to watch for complexity getting out of hand with all AMG solvers,
> but they are more or less good scalable codes.
>
> Mark
>
>
> [0]PCSetUp_GAMG level 0 N=46330, n data rows=1, n data cols=1, nnz/row
> (ave)=6, np=4
> [0]scaleFilterGraph 75.5527% nnz after filtering, with threshold 0.05,
> 6.95957 nnz ave.
> [0]maxIndSetAgg removed 0 of 46330 vertices. (0 local)
> [0]PCGAMGprolongator_AGG New grid 5903 nodes
> PCGAMGoptprol_AGG smooth P0: max eigen=1.923098e+00
> min=3.858220e-02 PC=jacobi
> [0]PCSetUp_GAMG 1) N=5903, n data cols=1, nnz/row (ave)=13, 4
> active pes
> [0]scaleFilterGraph 52.8421% nnz after filtering, with threshold 0.05,
> 13.3249 nnz ave.
> [0]maxIndSetAgg removed 0 of 5903 vertices. (0 local)
> [0]PCGAMGprolongator_AGG New grid 615 nodes
> PCGAMGoptprol_AGG smooth P0: max eigen=1.575363e+00
> min=2.167886e-03 PC=jacobi
> [0]PCSetUp_GAMG 2) N=615, n data cols=1, nnz/row (ave)=21, 4
> active pes
> [0]scaleFilterGraph 24.7174% nnz after filtering, with threshold 0.05,
> 21.722 nnz ave.
> [0]maxIndSetAgg removed 0 of 615 vertices. (0 local)
> [0]PCGAMGprolongator_AGG New grid 91 nodes
> PCGAMGoptprol_AGG smooth P0: max eigen=1.676442e+00
> min=2.270745e-03 PC=jacobi
> [0]createLevel aggregate processors: npe: 4 --> 1, neq=91
> [0]PCSetUp_GAMG 3) N=91, n data cols=1, nnz/row (ave)=37, 1 active
> pes
> [0]scaleFilterGraph 16.4384% nnz after filtering, with threshold 0.05,
> 37.7033 nnz ave.
> [0]maxIndSetAgg removed 0 of 91 vertices. (0 local)
> [0]PCGAMGprolongator_AGG New grid 10 nodes
> PCGAMGoptprol_AGG smooth P0: max eigen=1.538313e+00
> min=8.923063e-04 PC=jacobi
> [0]PCSetUp_GAMG 4) N=10, n data cols=1, nnz/row (ave)=10, 1 active
> pes
> [0]PCSetUp_GAMG 5 levels, grid compexity = 1.29633
> Residual norms for pres_ solve.
> 0 KSP preconditioned resid norm 4.680688832182e+06 true resid norm
> 2.621342052504e+03 ||r(i)||/||b|| 1.000000000000e+00
> 2 KSP preconditioned resid norm 1.728993898497e+04 true resid norm
> 2.888375221014e+03 ||r(i)||/||b|| 1.101868876004e+00
> 4 KSP preconditioned resid norm 4.510102902646e+02 true resid norm
> 5.677727287161e+01 ||r(i)||/||b|| 2.165962004744e-02
> 6 KSP preconditioned resid norm 3.959846836748e+01 true resid norm
> 1.973580779699e+00 ||r(i)||/||b|| 7.528894513455e-04
> 8 KSP preconditioned resid norm 3.175473803927e-01 true resid norm
> 4.315977395174e-02 ||r(i)||/||b|| 1.646476235732e-05
> 10 KSP preconditioned resid norm 7.502408552205e-04 true resid norm
> 1.016040400933e-04 ||r(i)||/||b|| 3.876031363257e-08
> 12 KSP preconditioned resid norm 2.868067261023e-06 true resid norm
> 1.194542164810e-06 ||r(i)||/||b|| 4.556986997056e-10
> KSP Object:(pres_) 4 MPI processes
> type: bcgsl
> BCGSL: Ell = 2
> BCGSL: Delta = 0
> maximum iterations=5000
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000
> left preconditioning
> has attached null space
> using nonzero initial guess
> using PRECONDITIONED norm type for convergence test
> PC Object:(pres_) 4 MPI processes
> type: gamg
> MG: type is MULTIPLICATIVE, levels=5 cycles=v
> Cycles per PCApply=1
> Using Galerkin computed coarse grid matrices
> Coarse grid solver -- level -------------------------------
> KSP Object: (pres_mg_coarse_) 4 MPI processes
> type: richardson
> Richardson: damping factor=1
> maximum iterations=1, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (pres_mg_coarse_) 4 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 8, local iterations = 1,
> omega = 1
> linear system matrix = precond matrix:
> Matrix Object: 4 MPI processes
> type: mpiaij
> rows=10, cols=10
> total: nonzeros=100, allocated nonzeros=100
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 2 nodes, limit used
> is 5
> Down solver (pre-smoother) on level 1 -------------------------------
> KSP Object: (pres_mg_levels_1_) 4 MPI processes
> type: richardson
> Richardson: damping factor=1
> maximum iterations=1
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (pres_mg_levels_1_) 4 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1
> linear system matrix = precond matrix:
> Matrix Object: 4 MPI processes
> type: mpiaij
> rows=91, cols=91
> total: nonzeros=3431, allocated nonzeros=3431
> total number of mallocs used during MatSetValues calls =0
> not using I-node (on process 0) routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 2 -------------------------------
> KSP Object: (pres_mg_levels_2_) 4 MPI processes
> type: richardson
> Richardson: damping factor=1
> maximum iterations=1
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (pres_mg_levels_2_) 4 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1
> linear system matrix = precond matrix:
> Matrix Object: 4 MPI processes
> type: mpiaij
> rows=615, cols=615
> total: nonzeros=13359, allocated nonzeros=13359
> total number of mallocs used during MatSetValues calls =0
> not using I-node (on process 0) routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 3 -------------------------------
> KSP Object: (pres_mg_levels_3_) 4 MPI processes
> type: richardson
> Richardson: damping factor=1
> maximum iterations=1
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (pres_mg_levels_3_) 4 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1
> linear system matrix = precond matrix:
> Matrix Object: 4 MPI processes
> type: mpiaij
> rows=5903, cols=5903
> total: nonzeros=78657, allocated nonzeros=78657
> total number of mallocs used during MatSetValues calls =0
> not using I-node (on process 0) routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 4 -------------------------------
> KSP Object: (pres_mg_levels_4_) 4 MPI processes
> type: richardson
> Richardson: damping factor=1
> maximum iterations=1
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> has attached null space
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (pres_mg_levels_4_) 4 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1
> linear system matrix = precond matrix:
> Matrix Object: 4 MPI processes
> type: mpiaij
> rows=46330, cols=46330
> total: nonzeros=322437, allocated nonzeros=615417
> total number of mallocs used during MatSetValues calls =0
> not using I-node (on process 0) routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> linear system matrix = precond matrix:
> Matrix Object: 4 MPI processes
> type: mpiaij
> rows=46330, cols=46330
> total: nonzeros=322437, allocated nonzeros=615417
> total number of mallocs used during MatSetValues calls =0
> not using I-node (on process 0) routines
>
>
>
>
> On Tue, Mar 20, 2012 at 2:21 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:
>
>> John,
>>
>> I had some diagonal scaling stuff in my input which seemed to mess things
>> up. I don't understand that. With your hypre parameters I get
>>
>> Complexity: grid = 1.408828
>> operator = 1.638900
>> cycle = 3.277856
>>
>> 0 KSP preconditioned resid norm 2.246209947341e+06 true resid norm
>> 2.621342052504e+03 ||r(i)||/||b|| 1.000000000000e+00
>> 2 KSP preconditioned resid norm 6.591054866442e+04 true resid norm
>> 5.518411654910e+03 ||r(i)||/||b|| 2.105185643224e+00
>> 4 KSP preconditioned resid norm 2.721184454964e+03 true resid norm
>> 1.937153214559e+03 ||r(i)||/||b|| 7.389929188022e-01
>> 6 KSP preconditioned resid norm 2.942012838854e+02 true resid norm
>> 5.614763956317e+01 ||r(i)||/||b|| 2.141942502679e-02
>> 8 KSP preconditioned resid norm 2.143421596353e+01 true resid norm
>> 5.306843482279e+00 ||r(i)||/||b|| 2.024475774617e-03
>> 10 KSP preconditioned resid norm 3.689048280659e+00 true resid norm
>> 2.482945300243e-01 ||r(i)||/||b|| 9.472038560826e-05
>> Linear solve converged due to CONVERGED_RTOL iterations 10
>>
>> with ML I get 18 iterations but if I add -pc_ml_Threshold .01 I get it to
>> 12:
>>
>> -@${MPIEXEC} -n 1 ./ex10 -f ./binaryoutput -pc_type ml -ksp_type bcgsl
>> -pc_gamg_coarse_eq_limit 10 -pc_gamg_agg_nsmooths 1 -pc_gamg_sym_graph
>> -mg_coarse_ksp_type richardson -mg_coarse_pc_type sor -mg_coarse_pc_sor_its
>> 8 -ksp_monitor_true_residual -pc_gamg_verbose 2 -ksp_converged_reason
>> -options_left -mg_levels_ksp_type richardson -mg_levels_pc_type sor
>> -pc_ml_maxNlevels 5 -pc_ml_Threshold .01
>>
>> 0 KSP preconditioned resid norm 1.987800354481e+06 true resid norm
>> 2.621342052504e+03 ||r(i)||/||b|| 1.000000000000e+00
>> 2 KSP preconditioned resid norm 4.845840795806e+04 true resid norm
>> 9.664923970856e+03 ||r(i)||/||b|| 3.687013666005e+00
>> 4 KSP preconditioned resid norm 4.086337251141e+03 true resid norm
>> 1.111442892542e+03 ||r(i)||/||b|| 4.239976585582e-01
>> 6 KSP preconditioned resid norm 1.496117919395e+03 true resid norm
>> 4.243682354730e+02 ||r(i)||/||b|| 1.618896835946e-01
>> 8 KSP preconditioned resid norm 1.019912311314e+02 true resid norm
>> 6.165476121107e+01 ||r(i)||/||b|| 2.352030371320e-02
>> 10 KSP preconditioned resid norm 1.282179114927e+01 true resid norm
>> 4.434755525096e+00 ||r(i)||/||b|| 1.691788189512e-03
>> 12 KSP preconditioned resid norm 2.801790417375e+00 true resid norm
>> 4.876299030996e-01 ||r(i)||/||b|| 1.860229963632e-04
>> Linear solve converged due to CONVERGED_RTOL iterations 12
>>
>> and gamg:
>>
>> -@${MPIEXEC} -n 1 ./ex10 -f ./binaryoutput -pc_type gamg -ksp_type bcgsl
>> -pc_gamg_coarse_eq_limit 10 -pc_gamg_agg_nsmooths 1 -pc_gamg_sym_graph
>> -mg_coarse_ksp_type richardson -mg_coarse_pc_type sor -mg_coarse_pc_sor_its
>> 8 -ksp_monitor_true_residual -pc_gamg_verbose 2 -ksp_converged_reason
>> -options_left -mg_levels_ksp_type richardson -mg_levels_pc_type sor
>>
>> [0]PCSetUp_GAMG 5 levels, grid compexity = 1.2916
>> 0 KSP preconditioned resid norm 6.288750978813e+06 true resid norm
>> 2.621342052504e+03 ||r(i)||/||b|| 1.000000000000e+00
>> 2 KSP preconditioned resid norm 3.009668424006e+04 true resid norm
>> 4.394363256786e+02 ||r(i)||/||b|| 1.676379186222e-01
>> 4 KSP preconditioned resid norm 2.079756553216e+01 true resid norm
>> 5.094584609440e+00 ||r(i)||/||b|| 1.943502414946e-03
>> 6 KSP preconditioned resid norm 4.323447593442e+00 true resid norm
>> 3.146656048880e-01 ||r(i)||/||b|| 1.200398874261e-04
>> Linear solve converged due to CONVERGED_RTOL iterations 6
>>
>> So this looks pretty different from what you are getting. Is your code
>> hardwiring anything? Can you reproduce my results with ksp ex10.c?
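>>
>> If your code does not already write the system out, here is a minimal
>> sketch of the dump (A and b stand for your assembled operator and rhs);
>> it produces a file that ex10 can read with -f:
>>
>>    PetscViewer viewer;
>>    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"binaryoutput",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
>>    ierr = MatView(A,viewer);CHKERRQ(ierr);   /* ex10 reads the Mat first */
>>    ierr = VecView(b,viewer);CHKERRQ(ierr);   /* then the rhs Vec */
>>    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);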
>>
>> Actually, I just realized that I am using petsc-dev. What version of
>> PETSc are you using?
>>
>> Also, here is the makefile that I use to run these jobs:
>>
>> ALL: runex10
>>
>> include ${PETSC_DIR}/conf/variables
>> include ${PETSC_DIR}/conf/rules
>>
>> runex10:
>> -@${MPIEXEC} -n 1 ./ex10 -f ./binaryoutput -pc_type gamg -ksp_type bcgsl
>> -pc_gamg_coarse_eq_limit 10 -pc_gamg_agg_nsmooths 1 -pc_gamg_sym_graph
>> -mg_coarse_ksp_type richardson -mg_coarse_pc_type sor -mg_coarse_pc_sor_its
>> 8 -ksp_monitor_true_residual -pc_gamg_verbose 2 -ksp_converged_reason
>> -options_left -mg_levels_ksp_type richardson -mg_levels_pc_type sor
>> -pc_ml_maxNlevels 5 -pc_ml_Threshold .01
>> -pc_hypre_boomeramg_relax_type_coarse symmetric-SOR/Jacobi
>> -pc_hypre_boomeramg_grid_sweeps_coarse 4 -pc_hypre_boomeramg_coarsen_type
>> PMIS
>>
>> You just need to run 'make ex10' and then 'make -f this-file'.
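>>
>> That is, with this makefile saved as, say, makefile.gamg (the name is
>> arbitrary) next to the tutorial:
>>
>>    cd ${PETSC_DIR}/src/ksp/ksp/examples/tutorials
>>    make ex10
>>    make -f makefile.gamg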
>>
>> Mark
>>
>> On Mar 20, 2012, at 2:45 PM, John Mousel wrote:
>>
>> Mark,
>>
>> I run ML with the following options.
>>
>> -ksp_type bcgsl -pc_type ml -pc_ml_maxNlevels 5 -mg_coarse_ksp_type
>> richardson -mg_coarse_pc_type sor -mg_coarse_pc_sor_its 8 -ksp_monitor
>> -ksp_view
>>
>> Note the lack of scaling. For some reason scaling seems to mess with ML.
>> As you can see below, ML converges very nicely.
>>
>> With regards to HYPRE, this one took a bit of work to get convergence.
>> The options that work are:
>>
>> -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg
>> -ksp_monitor_true_residual -pc_hypre_boomeramg_relax_type_coarse
>> symmetric-SOR/Jacobi -pc_hypre_boomeramg_grid_sweeps_coarse 4
>> -pc_hypre_boomeramg_coarsen_type PMIS
>>
>> The problem is that neither of ML or HYPRE seem to scale at all.
>>
>> ML output:
>> 0 KSP preconditioned resid norm 1.538968715109e+06 true resid norm
>> 2.621342052504e+03 ||r(i)||/||b|| 1.000000000000e+00
>> 2 KSP preconditioned resid norm 1.263129058693e+05 true resid norm
>> 1.096298699671e+04 ||r(i)||/||b|| 4.182203915830e+00
>> 4 KSP preconditioned resid norm 2.277379585186e+04 true resid norm
>> 2.999721659930e+03 ||r(i)||/||b|| 1.144345758717e+00
>> 6 KSP preconditioned resid norm 4.766504457975e+03 true resid norm
>> 6.733421603796e+02 ||r(i)||/||b|| 2.568692474667e-01
>> 8 KSP preconditioned resid norm 2.139020425406e+03 true resid norm
>> 1.360842101250e+02 ||r(i)||/||b|| 5.191394613876e-02
>> 10 KSP preconditioned resid norm 6.621380459944e+02 true resid norm
>> 1.522758800025e+02 ||r(i)||/||b|| 5.809080881188e-02
>> 12 KSP preconditioned resid norm 2.973409610262e+01 true resid norm
>> 1.161046206089e+01 ||r(i)||/||b|| 4.429205280479e-03
>> 14 KSP preconditioned resid norm 2.532665482573e+00 true resid norm
>> 2.557425874623e+00 ||r(i)||/||b|| 9.756170020543e-04
>> 16 KSP preconditioned resid norm 2.375585214826e+00 true resid norm
>> 2.441783841415e+00 ||r(i)||/||b|| 9.315014189327e-04
>> 18 KSP preconditioned resid norm 1.436338060675e-02 true resid norm
>> 1.305304828818e-02 ||r(i)||/||b|| 4.979528816437e-06
>> 20 KSP preconditioned resid norm 4.088293864561e-03 true resid norm
>> 9.841243465634e-04 ||r(i)||/||b|| 3.754276728683e-07
>> 22 KSP preconditioned resid norm 6.140822977383e-04 true resid norm
>> 1.682184150207e-04 ||r(i)||/||b|| 6.417263052718e-08
>> 24 KSP preconditioned resid norm 2.685415483430e-05 true resid norm
>> 1.065041542336e-05 ||r(i)||/||b|| 4.062962867890e-09
>> 26 KSP preconditioned resid norm 1.620776166579e-06 true resid norm
>> 9.563268703474e-07 ||r(i)||/||b|| 3.648233809982e-10
>> 28 KSP preconditioned resid norm 2.823291105652e-07 true resid norm
>> 7.705418741178e-08 ||r(i)||/||b|| 2.939493811507e-11
>> KSP Object:(pres_) 4 MPI processes
>> type: bcgsl
>> BCGSL: Ell = 2
>> BCGSL: Delta = 0
>> maximum iterations=5000
>> tolerances: relative=1e-12, absolute=1e-50, divergence=10000
>> left preconditioning
>> has attached null space
>> using nonzero initial guess
>> using PRECONDITIONED norm type for convergence test
>> PC Object:(pres_) 4 MPI processes
>> type: ml
>> MG: type is MULTIPLICATIVE, levels=5 cycles=v
>> Cycles per PCApply=1
>> Using Galerkin computed coarse grid matrices
>> Coarse grid solver -- level -------------------------------
>> KSP Object: (pres_mg_coarse_) 4 MPI processes
>> type: richardson
>> Richardson: damping factor=1
>> maximum iterations=1, initial guess is zero
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> left preconditioning
>> using PRECONDITIONED norm type for convergence test
>> PC Object: (pres_mg_coarse_) 4 MPI processes
>> type: sor
>> SOR: type = local_symmetric, iterations = 8, local iterations =
>> 1, omega = 1
>> linear system matrix = precond matrix:
>> Matrix Object: 4 MPI processes
>> type: mpiaij
>> rows=4, cols=4
>> total: nonzeros=16, allocated nonzeros=16
>> total number of mallocs used during MatSetValues calls =0
>> not using I-node (on process 0) routines
>> Down solver (pre-smoother) on level 1 -------------------------------
>> KSP Object: (pres_mg_levels_1_) 4 MPI processes
>> type: richardson
>> Richardson: damping factor=1
>> maximum iterations=1
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> left preconditioning
>> using nonzero initial guess
>> using PRECONDITIONED norm type for convergence test
>> PC Object: (pres_mg_levels_1_) 4 MPI processes
>> type: sor
>> SOR: type = local_symmetric, iterations = 1, local iterations =
>> 1, omega = 1
>> linear system matrix = precond matrix:
>> Matrix Object: 4 MPI processes
>> type: mpiaij
>> rows=25, cols=25
>> total: nonzeros=303, allocated nonzeros=303
>> total number of mallocs used during MatSetValues calls =0
>> using I-node (on process 0) routines: found 4 nodes, limit used
>> is 5
>> Up solver (post-smoother) same as down solver (pre-smoother)
>> Down solver (pre-smoother) on level 2 -------------------------------
>> KSP Object: (pres_mg_levels_2_) 4 MPI processes
>> type: richardson
>> Richardson: damping factor=1
>> maximum iterations=1
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> left preconditioning
>> using nonzero initial guess
>> using PRECONDITIONED norm type for convergence test
>> PC Object: (pres_mg_levels_2_) 4 MPI processes
>> type: sor
>> SOR: type = local_symmetric, iterations = 1, local iterations =
>> 1, omega = 1
>> linear system matrix = precond matrix:
>> Matrix Object: 4 MPI processes
>> type: mpiaij
>> rows=423, cols=423
>> total: nonzeros=7437, allocated nonzeros=7437
>> total number of mallocs used during MatSetValues calls =0
>> not using I-node (on process 0) routines
>> Up solver (post-smoother) same as down solver (pre-smoother)
>> Down solver (pre-smoother) on level 3 -------------------------------
>> KSP Object: (pres_mg_levels_3_) 4 MPI processes
>> type: richardson
>> Richardson: damping factor=1
>> maximum iterations=1
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> left preconditioning
>> using nonzero initial guess
>> using PRECONDITIONED norm type for convergence test
>> PC Object: (pres_mg_levels_3_) 4 MPI processes
>> type: sor
>> SOR: type = local_symmetric, iterations = 1, local iterations =
>> 1, omega = 1
>> linear system matrix = precond matrix:
>> Matrix Object: 4 MPI processes
>> type: mpiaij
>> rows=6617, cols=6617
>> total: nonzeros=88653, allocated nonzeros=88653
>> total number of mallocs used during MatSetValues calls =0
>> not using I-node (on process 0) routines
>> Up solver (post-smoother) same as down solver (pre-smoother)
>> Down solver (pre-smoother) on level 4 -------------------------------
>> KSP Object: (pres_mg_levels_4_) 4 MPI processes
>> type: richardson
>> Richardson: damping factor=1
>> maximum iterations=1
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> left preconditioning
>> has attached null space
>> using nonzero initial guess
>> using PRECONDITIONED norm type for convergence test
>> PC Object: (pres_mg_levels_4_) 4 MPI processes
>> type: sor
>> SOR: type = local_symmetric, iterations = 1, local iterations =
>> 1, omega = 1
>> linear system matrix = precond matrix:
>> Matrix Object: 4 MPI processes
>> type: mpiaij
>> rows=46330, cols=46330
>> total: nonzeros=322437, allocated nonzeros=615417
>> total number of mallocs used during MatSetValues calls =0
>> not using I-node (on process 0) routines
>> Up solver (post-smoother) same as down solver (pre-smoother)
>> linear system matrix = precond matrix:
>> Matrix Object: 4 MPI processes
>> type: mpiaij
>> rows=46330, cols=46330
>> total: nonzeros=322437, allocated nonzeros=615417
>> total number of mallocs used during MatSetValues calls =0
>> not using I-node (on process 0) routines
>>
>>
>>
>> John
>>
>>
>>
>> On Tue, Mar 20, 2012 at 1:33 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:
>>
>>> John,
>>>
>>> I am getting poor results (diverging) from ML also:
>>>
>>> 0 KSP preconditioned resid norm 3.699832960909e+22 true resid norm
>>> 1.310674055116e+03 ||r(i)||/||b|| 1.000000000000e+00
>>> 2 KSP preconditioned resid norm 5.706378365783e+11 true resid norm
>>> 1.563902233018e+03 ||r(i)||/||b|| 1.193204539995e+00
>>> 4 KSP preconditioned resid norm 5.570291685152e+11 true resid norm
>>> 1.564235542744e+03 ||r(i)||/||b|| 1.193458844050e+00
>>> 6 KSP preconditioned resid norm 5.202150407298e+10 true resid norm
>>> 1.749929789082e+03 ||r(i)||/||b|| 1.335137277077e+00
>>> Linear solve converged due to CONVERGED_RTOL iterations 6
>>>
>>> With GAMG I get:
>>>
>>> 0 KSP preconditioned resid norm 7.731260075891e+06 true resid norm
>>> 1.310674055116e+03 ||r(i)||/||b|| 1.000000000000e+00
>>> 2 KSP preconditioned resid norm 2.856415184685e+05 true resid norm
>>> 1.310410242531e+03 ||r(i)||/||b|| 9.997987199150e-01
>>> 4 KSP preconditioned resid norm 1.528467019258e+05 true resid norm
>>> 1.284856538976e+03 ||r(i)||/||b|| 9.803021078816e-01
>>> 6 KSP preconditioned resid norm 1.451091957899e+05 true resid norm
>>> 1.564309254168e+03 ||r(i)||/||b|| 1.193515083375e+00
>>>
>>> <snip>
>>>
>>> 122 KSP preconditioned resid norm 2.486245341783e+01 true resid norm
>>> 1.404397185367e+00 ||r(i)||/||b|| 1.071507580306e-03
>>> 124 KSP preconditioned resid norm 1.482316853621e+01 true resid norm
>>> 4.488661881759e-01 ||r(i)||/||b|| 3.424697287811e-04
>>> 126 KSP preconditioned resid norm 1.481941150253e+01 true resid norm
>>> 4.484480100832e-01 ||r(i)||/||b|| 3.421506730318e-04
>>> 128 KSP preconditioned resid norm 8.191887347033e+00 true resid norm
>>> 6.678630367218e-01 ||r(i)||/||b|| 5.095569215816e-04
>>>
>>> And HYPRE:
>>>
>>> 0 KSP preconditioned resid norm 3.774510769907e+04 true resid norm
>>> 1.310674055116e+03 ||r(i)||/||b|| 1.000000000000e+00
>>> 2 KSP preconditioned resid norm 1.843165835831e+04 true resid norm
>>> 8.502433792869e+02 ||r(i)||/||b|| 6.487069580482e-01
>>> 4 KSP preconditioned resid norm 1.573176624705e+04 true resid norm
>>> 1.167264367302e+03 ||r(i)||/||b|| 8.905832558033e-01
>>> 6 KSP preconditioned resid norm 1.657958380765e+04 true resid norm
>>> 8.684701624902e+02 ||r(i)||/||b|| 6.626133775216e-01
>>> 8 KSP preconditioned resid norm 2.190304455083e+04 true resid norm
>>> 6.969893263600e+02 ||r(i)||/||b|| 5.317792960344e-01
>>> 10 KSP preconditioned resid norm 2.485714630000e+04 true resid norm
>>> 6.642641436830e+02 ||r(i)||/||b|| 5.068110878446e-01
>>>
>>> <snip>
>>>
>>> 62 KSP preconditioned resid norm 6.432516040957e+00 true resid norm
>>> 2.124960171419e-01 ||r(i)||/||b|| 1.621272781837e-04
>>> 64 KSP preconditioned resid norm 5.731033176541e+00 true resid norm
>>> 1.338816774003e-01 ||r(i)||/||b|| 1.021471943216e-04
>>> 66 KSP preconditioned resid norm 1.600285935522e-01 true resid norm
>>> 3.352408932031e-03 ||r(i)||/||b|| 2.557774695353e-06
>>>
>>> ML and GAMG should act similarly, but ML seems to have a problem (see
>>> the preconditioned norm difference; it's diverging). ML has a parameter:
>>>
>>> -pc_ml_Threshold [.0]
>>>
>>> Setting this to 0.05 (the GAMG default) helps a bit, but it still diverges.
>>>
>>> So it would be nice to figure out the difference between ML and GAMG,
>>> but that is secondary for you as they both suck.
>>>
>>> HYPRE is a very different algorithm. It looks like the smoothing in
>>> GAMG (and ML) may be the problem. If I turn smoothing off
>>> (-pc_gamg_agg_nsmooths 0) I get for GAMG:
>>>
>>> 0 KSP preconditioned resid norm 2.186148437534e+05 true resid norm
>>> 1.310674055116e+03 ||r(i)||/||b|| 1.000000000000e+00
>>> 2 KSP preconditioned resid norm 2.916843959765e+04 true resid norm
>>> 3.221533667508e+03 ||r(i)||/||b|| 2.457921292432e+00
>>> 4 KSP preconditioned resid norm 2.396374655925e+04 true resid norm
>>> 1.834299897412e+03 ||r(i)||/||b|| 1.399508817812e+00
>>> 6 KSP preconditioned resid norm 2.509576275453e+04 true resid norm
>>> 1.035475461174e+03 ||r(i)||/||b|| 7.900327752214e-01
>>>
>>> <snip>
>>>
>>> 64 KSP preconditioned resid norm 1.973859758284e+01 true resid norm
>>> 7.322674977169e+00 ||r(i)||/||b|| 5.586953482895e-03
>>> 66 KSP preconditioned resid norm 3.371598890438e+01 true resid norm
>>> 7.152754930495e+00 ||r(i)||/||b|| 5.457310231004e-03
>>> 68 KSP preconditioned resid norm 4.687839294418e+00 true resid norm
>>> 4.605726307025e-01 ||r(i)||/||b|| 3.514013487219e-04
>>> 70 KSP preconditioned resid norm 1.487545519493e+00 true resid norm
>>> 1.558723789416e-01 ||r(i)||/||b|| 1.189253562571e-04
>>> 72 KSP preconditioned resid norm 5.317329808718e-01 true resid norm
>>> 5.027178038177e-02 ||r(i)||/||b|| 3.835566911967e-05
>>> 74 KSP preconditioned resid norm 3.405339702462e-01 true resid norm
>>> 1.897059263835e-02 ||r(i)||/||b|| 1.447392092969e-05
>>>
>>> This is almost as good as HYPRE.
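>>>
>>> (For context: with -pc_gamg_agg_nsmooths 1 the tentative prolongator P0
>>> is smoothed, roughly P = (I - omega*inv(D)*A)*P0, with the damping omega
>>> taken from the eigenvalue estimate printed as "smooth P0: max eigen=..."
>>> in the verbose output (classically omega = 4/(3*lambda_max) in smoothed
>>> aggregation); nsmooths 0 just uses the plain-aggregation P0.)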
>>>
>>> Another thing to keep in mind is the cost of each iteration, not just
>>> the number of iterations. You can
>>> use -pc_hypre_boomeramg_print_statistics to get some data on this from
>>> HYPRE:
>>>
>>> Average Convergence Factor = 0.537664
>>>
>>> Complexity: grid = 1.780207
>>> operator = 2.624910
>>> cycle = 5.249670
>>>
>>> And GAMG prints this with verbose set:
>>>
>>> [0]PCSetUp_GAMG 6 levels, grid compexity [sic] = 1.1316
>>>
>>> I believe that the hypre "Complexity: grid" is the same as my "grid
>>> complexity". So hypre actually looks more expensive at this point.
>>>
>>> I've worked on optimizing parameters for hypre with the hypre people, and
>>> here is a set of arguments that I've used:
>>>
>>> -pc_hypre_boomeramg_no_CF -pc_hypre_boomeramg_agg_nl 1
>>> -pc_hypre_boomeramg_coarsen_type HMIS -pc_hypre_boomeramg_interp_type ext+i
>>> -pc_hypre_boomeramg_P_max 4 -pc_hypre_boomeramg_agg_num_paths 2
>>>
>>> With these parameters I get:
>>>
>>> Complexity: grid = 1.244140
>>> operator = 1.396722
>>> cycle = 2.793442
>>>
>>> and:
>>>
>>> 0 KSP preconditioned resid norm 4.698624821403e+04 true resid norm
>>> 1.310674055116e+03 ||r(i)||/||b|| 1.000000000000e+00
>>> 2 KSP preconditioned resid norm 2.207967626172e+04 true resid norm
>>> 3.466160021150e+03 ||r(i)||/||b|| 2.644562931280e+00
>>> 4 KSP preconditioned resid norm 2.278468320876e+04 true resid norm
>>> 1.246784122467e+03 ||r(i)||/||b|| 9.512541410282e-01
>>>
>>> <snip>
>>>
>>> 56 KSP preconditioned resid norm 1.108460232262e+00 true resid norm
>>> 8.276869475681e-02 ||r(i)||/||b|| 6.314971631105e-05
>>> 58 KSP preconditioned resid norm 3.617217454336e-01 true resid norm
>>> 3.764556404754e-02 ||r(i)||/||b|| 2.872229285428e-05
>>> 60 KSP preconditioned resid norm 1.666532560770e-01 true resid norm
>>> 5.149302513338e-03 ||r(i)||/||b|| 3.928743758404e-06
>>> Linear solve converged due to CONVERGED_RTOL iterations 60
>>>
>>> So this actually converged faster with lower complexity.
>>>
>>> Anyway, these results seem different from what you are getting, so I've
>>> appended my options. This uses ex10 in the KSP tutorials to read in your
>>> binary file.
>>>
>>> Mark
>>>
>>> #PETSc Option Table entries:
>>> -f ./binaryoutput
>>> -ksp_converged_reason
>>> -ksp_diagonal_scale
>>> -ksp_diagonal_scale_fix
>>> -ksp_monitor_true_residual
>>> -ksp_type bcgsl
>>> -mg_coarse_ksp_type richardson
>>> -mg_coarse_pc_sor_its 8
>>> -mg_coarse_pc_type sor
>>> -mg_levels_ksp_type richardson
>>> -mg_levels_pc_type sor
>>> -options_left
>>> -pc_gamg_agg_nsmooths 0
>>> -pc_gamg_coarse_eq_limit 10
>>> -pc_gamg_sym_graph
>>> -pc_gamg_verbose 2
>>> -pc_hypre_boomeramg_P_max 4
>>> -pc_hypre_boomeramg_agg_nl 1
>>> -pc_hypre_boomeramg_agg_num_paths 2
>>> -pc_hypre_boomeramg_coarsen_type HMIS
>>> -pc_hypre_boomeramg_interp_type ext+i
>>> -pc_hypre_boomeramg_no_CF
>>> -pc_ml_Threshold .01
>>> -pc_type gamg
>>> -vecload_block_size 1
>>> #End of PETSc Option Table entries
>>> There are 7 unused database options. They are:
>>> Option left: name:-pc_hypre_boomeramg_P_max value: 4
>>> Option left: name:-pc_hypre_boomeramg_agg_nl value: 1
>>> Option left: name:-pc_hypre_boomeramg_agg_num_paths value: 2
>>> Option left: name:-pc_hypre_boomeramg_coarsen_type value: HMIS
>>> Option left: name:-pc_hypre_boomeramg_interp_type value: ext+i
>>> Option left: name:-pc_hypre_boomeramg_no_CF no value
>>> Option left: name:-pc_ml_Threshold value: .01
>>>
>>>
>>> On Mar 20, 2012, at 10:19 AM, John Mousel wrote:
>>>
>>> Mark,
>>>
>>> Sorry for the late reply. I've been on travel and hadn't had a chance to
>>> pick this back up. I've tried running with the suggested options:
>>>
>>> -ksp_type bcgsl -pc_type gamg -pc_gamg_coarse_eq_limit 10
>>> -pc_gamg_agg_nsmooths 1 -pc_gamg_sym_graph -mg_coarse_ksp_type richardson
>>> -mg_coarse_pc_type sor -mg_coarse_pc_sor_its 8 -ksp_diagonal_scale
>>> -ksp_diagonal_scale_fix -ksp_monitor_true_residual -ksp_view
>>> -pc_gamg_verbose 1
>>>
>>> With these options, the convergence starts to stall (see attached
>>> GAMG_kspview.txt). The stalling happens for both -mg_coarse_ksp_type
>>> richardson and preonly. It was my understanding from previous emails that
>>> using preonly made it so that only the preconditioner was run, which in
>>> this case would be 8 sweeps of SOR. If I get rid of the
>>> -pc_gamg_agg_nsmooths 1 (see GAMG_kspview_nosmooth.txt), the problem
>>> converges, but again the convergence is slow. Without this option, both
>>> Richardson and preonly converge in 172 iterations.
>>>
>>> Matt, I've checked and the problem does converge in the true residual
>>> using GAMG, ML, HYPRE, and ILU-preconditioned BiCG. I explicitly ensure
>>> that a solution exists by projecting the rhs vector out of the nullity of
>>> the transpose of the operator.
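>>>
>>> A minimal sketch of that kind of projection (the common constant-null-space
>>> case, not my exact code; for the non-symmetric operator the vector being
>>> projected out is the null space of the transpose, not the constant vector):
>>>
>>>    MatNullSpace nullsp;
>>>    ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,&nullsp);CHKERRQ(ierr);
>>>    ierr = KSPSetNullSpace(ksp,nullsp);CHKERRQ(ierr);             /* tell the solver about it */
>>>    ierr = MatNullSpaceRemove(nullsp,b,PETSC_NULL);CHKERRQ(ierr); /* project the rhs in place */
>>>    ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);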
>>>
>>> John
>>>
>>>
>>> On Fri, Mar 16, 2012 at 2:04 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:
>>>
>>>> John, did this get resolved?
>>>> Mark
>>>>
>>>> On Mar 15, 2012, at 4:24 PM, John Mousel wrote:
>>>>
>>>> Mark,
>>>>
>>>> Running without the options you mentioned before leads to slightly
>>>> worse performance (175 iterations).
>>>> I have not been able to get the coarse grid solve to work with LU while
>>>> running ML. It keeps experiencing a zero pivot, and all the combinations of
>>>> shifting I've tried haven't led me anywhere, hence the SOR on the coarse
>>>> grid. Also, the ML manual suggests limiting the number of levels to 3 or 4
>>>> and performing a few sweeps of an iterative method as opposed to a direct
>>>> solve.
>>>>
>>>> John
>>>>
>>>> On Thu, Mar 15, 2012 at 12:04 PM, Mark F. Adams <
>>>> mark.adams at columbia.edu> wrote:
>>>>
>>>>> You also want: -pc_gamg_agg_nsmooths 1
>>>>>
>>>>> You are running plain aggregation. If it is Poisson then smoothing is
>>>>> good.
>>>>>
>>>>> Is this problem singular? Can you try running ML with these
>>>>> parameters and see if its performance degrades? The ML implementation uses
>>>>> the PETSc infrastructure and uses a very similar algorithm to GAMG-SA. We
>>>>> should be able to get these two to match pretty well.
>>>>>
>>>>> Mark
>>>>>
>>>>>
>>>>> On Mar 15, 2012, at 12:21 PM, John Mousel wrote:
>>>>>
>>>>> Mark,
>>>>>
>>>>> I ran with those options removed (see the run options listed below).
>>>>> Things actually got slightly worse. Now it's up to 142 iterations. I have
>>>>> attached the ksp_view output.
>>>>>
>>>>> -ksp_type bcgsl -pc_type gamg -pc_gamg_sym_graph -ksp_diagonal_scale
>>>>> -ksp_diagonal_scale_fix -mg_levels_ksp_type richardson -mg_levels_pc_type
>>>>> sor -pc_gamg_verbose 1
>>>>>
>>>>>
>>>>> John
>>>>>
>>>>>
>>>>> On Thu, Mar 15, 2012 at 10:55 AM, Mark F. Adams <
>>>>> mark.adams at columbia.edu> wrote:
>>>>>
>>>>>> John, can you run again with: -pc_gamg_verbose 1
>>>>>>
>>>>>> And I would not use: -pc_mg_levels 4 -mg_coarse_ksp_type preonly
>>>>>> -mg_coarse_pc_type sor -mg_coarse_pc_sor_its 8
>>>>>>
>>>>>> 1) I think -mg_coarse_ksp_type preonly and -mg_coarse_pc_sor_its 8 do
>>>>>> not do what you think. I think this is the same as 1 iteration. I think
>>>>>> you want 'richardson' not 'preonly'.
>>>>>>
>>>>>> 2) Why are you using sor as the coarse solver? If your problem is
>>>>>> singular then you want to use as many levels as possible to get the coarse
>>>>>> grid to be tiny. I'm pretty sure HYPRE ignores the coarse solver
>>>>>> parameters. But ML uses them and it is converging well.
>>>>>>
>>>>>> 3) I would not specify the number of levels. GAMG, and I think the
>>>>>> rest, have internal logic for stopping at the right level. If the coarse
>>>>>> level is large and you use just 8 iterations of sor then convergence will
>>>>>> suffer.
>>>>>>
>>>>>> Mark
>>>>>>
>>>>>> On Mar 15, 2012, at 11:13 AM, John Mousel wrote:
>>>>>>
>>>>>> Mark,
>>>>>>
>>>>>> The changes pulled through this morning. I've run it with the options
>>>>>>
>>>>>> -ksp_type bcgsl -pc_type gamg -pc_gamg_sym_graph -ksp_diagonal_scale
>>>>>> -ksp_diagonal_scale_fix -pc_mg_levels 4 -mg_levels_ksp_type richardson
>>>>>> -mg_levels_pc_type sor -mg_coarse_ksp_type preonly -mg_coarse_pc_type sor
>>>>>> -mg_coarse_pc_sor_its 8
>>>>>>
>>>>>> and it converges in the true residual, but it's not converging as
>>>>>> fast as anticipated. The matrix arises from a non-symmetric discretization
>>>>>> of the Poisson equation. The solve takes GAMG 114 iterations, whereas ML
>>>>>> takes 24 iterations, BoomerAMG takes 22 iterations, and -ksp_type bcgsl
>>>>>> -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 4 takes around 170.
>>>>>> I've attached the -ksp_view results for ML,GAMG, and HYPRE. I've attempted
>>>>>> to make all the options the same on all levels for ML and GAMG.
>>>>>>
>>>>>> Any thoughts?
>>>>>>
>>>>>> John
>>>>>>
>>>>>>
>>>>>> On Wed, Mar 14, 2012 at 6:04 PM, Mark F. Adams <
>>>>>> mark.adams at columbia.edu> wrote:
>>>>>>
>>>>>>> Humm, I see it with hg view (appended).
>>>>>>>
>>>>>>> Satish, my main repo looks hosed. I see this:
>>>>>>>
>>>>>>> ~/Codes/petsc-dev>hg update
>>>>>>> abort: crosses branches (merge branches or use --clean to discard
>>>>>>> changes)
>>>>>>> ~/Codes/petsc-dev>hg merge
>>>>>>> abort: branch 'default' has 3 heads - please merge with an explicit
>>>>>>> rev
>>>>>>> (run 'hg heads .' to see heads)
>>>>>>> ~/Codes/petsc-dev>hg heads
>>>>>>> changeset: 22496:8e2a98268179
>>>>>>> tag: tip
>>>>>>> user: Barry Smith <bsmith at mcs.anl.gov>
>>>>>>> date: Wed Mar 14 16:42:25 2012 -0500
>>>>>>> files: src/vec/is/interface/f90-custom/zindexf90.c
>>>>>>> src/vec/vec/interface/f90-custom/zvectorf90.c
>>>>>>> description:
>>>>>>> undoing manually changes I put in because Satish had a better fix
>>>>>>>
>>>>>>>
>>>>>>> changeset: 22492:bda4df63072d
>>>>>>> user: Mark F. Adams <mark.adams at columbia.edu>
>>>>>>> date: Wed Mar 14 17:39:52 2012 -0400
>>>>>>> files: src/ksp/pc/impls/gamg/tools.c
>>>>>>> description:
>>>>>>> fix for unsymmetric matrices.
>>>>>>>
>>>>>>>
>>>>>>> changeset: 22469:b063baf366e4
>>>>>>> user: Mark F. Adams <mark.adams at columbia.edu>
>>>>>>> date: Wed Mar 14 14:22:28 2012 -0400
>>>>>>> files: src/ksp/pc/impls/gamg/tools.c
>>>>>>> description:
>>>>>>> added fix for preallocation for unsymetric matrices.
>>>>>>>
>>>>>>> Mark
>>>>>>>
>>>>>>> my 'hg view' on my merge repo:
>>>>>>>
>>>>>>> Revision: 22492
>>>>>>> Branch: default
>>>>>>> Author: Mark F. Adams <mark.adams at columbia.edu> 2012-03-14 17:39:52
>>>>>>> Committer: Mark F. Adams <mark.adams at columbia.edu> 2012-03-14
>>>>>>> 17:39:52
>>>>>>> Tags: tip
>>>>>>> Parent: 22491:451bbbd291c2 (Small fixes to the BT linesearch)
>>>>>>>
>>>>>>> fix for unsymmetric matrices.
>>>>>>>
>>>>>>>
>>>>>>> ------------------------ src/ksp/pc/impls/gamg/tools.c
>>>>>>> ------------------------
>>>>>>> @@ -103,7 +103,7 @@
>>>>>>> PetscErrorCode ierr;
>>>>>>> PetscInt Istart,Iend,Ii,jj,ncols,nnz0,nnz1, NN, MM, nloc;
>>>>>>> PetscMPIInt mype, npe;
>>>>>>> - Mat Gmat = *a_Gmat, tGmat;
>>>>>>> + Mat Gmat = *a_Gmat, tGmat, matTrans;
>>>>>>> MPI_Comm wcomm = ((PetscObject)Gmat)->comm;
>>>>>>> const PetscScalar *vals;
>>>>>>> const PetscInt *idx;
>>>>>>> @@ -127,6 +127,10 @@
>>>>>>> ierr = MatDiagonalScale( Gmat, diag, diag ); CHKERRQ(ierr);
>>>>>>> ierr = VecDestroy( &diag ); CHKERRQ(ierr);
>>>>>>>
>>>>>>> + if( symm ) {
>>>>>>> + ierr = MatTranspose( Gmat, MAT_INITIAL_MATRIX, &matTrans );
>>>>>>> CHKERRQ(ierr);
>>>>>>> + }
>>>>>>> +
>>>>>>> /* filter - dup zeros out matrix */
>>>>>>> ierr = PetscMalloc( nloc*sizeof(PetscInt), &d_nnz );
>>>>>>> CHKERRQ(ierr);
>>>>>>> ierr = PetscMalloc( nloc*sizeof(PetscInt), &o_nnz );
>>>>>>> CHKERRQ(ierr);
>>>>>>> @@ -135,6 +139,12 @@
>>>>>>> d_nnz[jj] = ncols;
>>>>>>> o_nnz[jj] = ncols;
>>>>>>> ierr = MatRestoreRow(Gmat,Ii,&ncols,PETSC_NULL,PETSC_NULL);
>>>>>>> CHKERRQ(ierr);
>>>>>>> + if( symm ) {
>>>>>>> + ierr = MatGetRow(matTrans,Ii,&ncols,PETSC_NULL,PETSC_NULL);
>>>>>>> CHKERRQ(ierr);
>>>>>>> + d_nnz[jj] += ncols;
>>>>>>> + o_nnz[jj] += ncols;
>>>>>>> + ierr =
>>>>>>> MatRestoreRow(matTrans,Ii,&ncols,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr);
>>>>>>> + }
>>>>>>> if( d_nnz[jj] > nloc ) d_nnz[jj] = nloc;
>>>>>>> if( o_nnz[jj] > (MM-nloc) ) o_nnz[jj] = MM - nloc;
>>>>>>> }
>>>>>>> @@ -142,6 +152,9 @@
>>>>>>> CHKERRQ(ierr);
>>>>>>> ierr = PetscFree( d_nnz ); CHKERRQ(ierr);
>>>>>>> ierr = PetscFree( o_nnz ); CHKERRQ(ierr);
>>>>>>> + if( symm ) {
>>>>>>> + ierr = MatDestroy( &matTrans ); CHKERRQ(ierr);
>>>>>>> + }
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mar 14, 2012, at 5:53 PM, John Mousel wrote:
>>>>>>>
>>>>>>> Mark,
>>>>>>>
>>>>>>> No change. Can you give me the location that you patched so I can
>>>>>>> check to make sure it pulled?
>>>>>>> I don't see it on the petsc-dev change log.
>>>>>>>
>>>>>>> John
>>>>>>>
>>>>>>> On Wed, Mar 14, 2012 at 4:40 PM, Mark F. Adams <
>>>>>>> mark.adams at columbia.edu> wrote:
>>>>>>>
>>>>>>>> John, I've committed these changes, give a try.
>>>>>>>>
>>>>>>>> Mark
>>>>>>>>
>>>>>>>> On Mar 14, 2012, at 3:46 PM, Satish Balay wrote:
>>>>>>>>
>>>>>>>> > This is the usual merge [with uncommitted changes] issue.
>>>>>>>> >
>>>>>>>> > You could use 'hg shelf' extension to shelve your local changes
>>>>>>>> and
>>>>>>>> > then do a merge [as Sean would suggest] - or do the merge in a
>>>>>>>> > separate/clean clone [I normally do this..]
>>>>>>>> >
>>>>>>>> > i.e
>>>>>>>> > cd ~/Codes
>>>>>>>> > hg clone petsc-dev petsc-dev-merge
>>>>>>>> > cd petsc-dev-merge
>>>>>>>> > hg pull ssh://petsc@petsc.cs.iit.edu//hg/petsc/petsc-dev #just
>>>>>>>> to be sure, look for latest changes before merge..
>>>>>>>> > hg merge
>>>>>>>> > hg commit
>>>>>>>> > hg push ssh://petsc@petsc.cs.iit.edu//hg/petsc/petsc-dev
>>>>>>>> >
>>>>>>>> > [now update your petsc-dev to latest]
>>>>>>>> > cd ~/Codes/petsc-dev
>>>>>>>> > hg pull
>>>>>>>> > hg update
>>>>>>>> >
>>>>>>>> > Satish
>>>>>>>> >
>>>>>>>> > On Wed, 14 Mar 2012, Mark F. Adams wrote:
>>>>>>>> >
>>>>>>>> >> Great, that seems to work.
>>>>>>>> >>
>>>>>>>> >> I did a 'hg commit tools.c'
>>>>>>>> >>
>>>>>>>> >> and I want to push this file only. I guess it's the only thing
>>>>>>>> in the change set so 'hg push' should be fine. But I see this:
>>>>>>>> >>
>>>>>>>> >> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg update
>>>>>>>> >> abort: crosses branches (merge branches or use --clean to
>>>>>>>> discard changes)
>>>>>>>> >> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg merge
>>>>>>>> >> abort: outstanding uncommitted changes (use 'hg status' to list
>>>>>>>> changes)
>>>>>>>> >> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg status
>>>>>>>> >> M include/petscmat.h
>>>>>>>> >> M include/private/matimpl.h
>>>>>>>> >> M src/ksp/pc/impls/gamg/agg.c
>>>>>>>> >> M src/ksp/pc/impls/gamg/gamg.c
>>>>>>>> >> M src/ksp/pc/impls/gamg/gamg.h
>>>>>>>> >> M src/ksp/pc/impls/gamg/geo.c
>>>>>>>> >> M src/mat/coarsen/coarsen.c
>>>>>>>> >> M src/mat/coarsen/impls/hem/hem.c
>>>>>>>> >> M src/mat/coarsen/impls/mis/mis.c
>>>>>>>> >>
>>>>>>>> >> Am I ready to do a push?
>>>>>>>> >>
>>>>>>>> >> Thanks,
>>>>>>>> >> Mark
>>>>>>>> >>
>>>>>>>> >> On Mar 14, 2012, at 2:44 PM, Satish Balay wrote:
>>>>>>>> >>
>>>>>>>> >>> If commit is the last hg operation that you've done - then 'hg
>>>>>>>> rollback' would undo this commit.
>>>>>>>> >>>
>>>>>>>> >>> Satish
>>>>>>>> >>>
>>>>>>>> >>> On Wed, 14 Mar 2012, Mark F. Adams wrote:
>>>>>>>> >>>
>>>>>>>> >>>> Damn, I'm not preallocating the graph perfectly for
>>>>>>>> unsymmetric matrices and PETSc now dies on this.
>>>>>>>> >>>>
>>>>>>>> >>>> I have a fix but I committed it with other changes that I do
>>>>>>>> not want to commit. The changes are all in one file so I should be able to
>>>>>>>> just commit this file.
>>>>>>>> >>>>
>>>>>>>> >>>> Anyone know how to delete a commit?
>>>>>>>> >>>>
>>>>>>>> >>>> I've tried:
>>>>>>>> >>>>
>>>>>>>> >>>> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg strip
>>>>>>>> 22487:26ffb9eef17f
>>>>>>>> >>>> hg: unknown command 'strip'
>>>>>>>> >>>> 'strip' is provided by the following extension:
>>>>>>>> >>>>
>>>>>>>> >>>> mq manage a stack of patches
>>>>>>>> >>>>
>>>>>>>> >>>> use "hg help extensions" for information on enabling extensions
>>>>>>>> >>>>
>>>>>>>> >>>> But have not figured out how to load extensions.
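>>>>>>>> >>>>
>>>>>>>> >>>> (For what it's worth, assuming a stock Mercurial install,
>>>>>>>> >>>> extensions are enabled with a two-line entry in ~/.hgrc:
>>>>>>>> >>>>
>>>>>>>> >>>>    [extensions]
>>>>>>>> >>>>    mq =
>>>>>>>> >>>>
>>>>>>>> >>>> after which 'hg strip <rev>' becomes available.)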
>>>>>>>> >>>>
>>>>>>>> >>>> Mark
>>>>>>>> >>>>
>>>>>>>> >>>> On Mar 14, 2012, at 12:54 PM, John Mousel wrote:
>>>>>>>> >>>>
>>>>>>>> >>>>> Mark,
>>>>>>>> >>>>>
>>>>>>>> >>>>> I have a non-symmetric matrix. I am running with the
>>>>>>>> following options.
>>>>>>>> >>>>>
>>>>>>>> >>>>> -pc_type gamg -pc_gamg_sym_graph -ksp_monitor_true_residual
>>>>>>>> >>>>>
>>>>>>>> >>>>> and with the inclusion of -pc_gamg_sym_graph, I get a new
>>>>>>>> malloc error:
>>>>>>>> >>>>>
>>>>>>>> >>>>>
>>>>>>>> >>>>> [0]PETSC ERROR: --------------------- Error Message
>>>>>>>> ------------------------------------
>>>>>>>> >>>>> [0]PETSC ERROR: Argument out of range!
>>>>>>>> >>>>> [0]PETSC ERROR: New nonzero at (5150,9319) caused a malloc!
>>>>>>>> >>>>> [0]PETSC ERROR:
>>>>>>>> ------------------------------------------------------------------------
>>>>>>>> >>>>> [0]PETSC ERROR: Petsc Development HG revision:
>>>>>>>> 587b25035091aaa309c87c90ac64c13408ecf34e HG Date: Wed Mar 14 09:22:54 2012
>>>>>>>> -0500
>>>>>>>> >>>>> [0]PETSC ERROR: See docs/changes/index.html for recent
>>>>>>>> updates.
>>>>>>>> >>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble
>>>>>>>> shooting.
>>>>>>>> >>>>> [0]PETSC ERROR: See docs/index.html for manual pages.
>>>>>>>> >>>>> [0]PETSC ERROR:
>>>>>>>> ------------------------------------------------------------------------
>>>>>>>> >>>>> [0]PETSC ERROR: ../JohnRepo/VFOLD_exe on a linux-deb named
>>>>>>>> wv.iihr.uiowa.edu by jmousel Wed Mar 14 11:51:35 2012
>>>>>>>> >>>>> [0]PETSC ERROR: Libraries linked from
>>>>>>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/linux-debug/lib
>>>>>>>> >>>>> [0]PETSC ERROR: Configure run at Wed Mar 14 09:46:39 2012
>>>>>>>> >>>>> [0]PETSC ERROR: Configure options --download-blacs=1
>>>>>>>> --download-hypre=1 --download-metis=1 --download-ml=1 --download-mpich=1
>>>>>>>> --download-parmetis=1 --download-scalapack=1
>>>>>>>> --with-blas-lapack-dir=/opt/intel11/mkl/lib/em64t --with-cc=gcc
>>>>>>>> --with-cmake=/usr/local/bin/cmake --with-cxx=g++ --with-fc=ifort
>>>>>>>> PETSC_ARCH=linux-debug
>>>>>>>> >>>>> [0]PETSC ERROR:
>>>>>>>> ------------------------------------------------------------------------
>>>>>>>> >>>>> [0]PETSC ERROR: MatSetValues_MPIAIJ() line 506 in
>>>>>>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/mat/impls/aij/mpi/mpiaij.c
>>>>>>>> >>>>> [0]PETSC ERROR: MatSetValues() line 1141 in
>>>>>>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/mat/interface/matrix.c
>>>>>>>> >>>>> [0]PETSC ERROR: scaleFilterGraph() line 155 in
>>>>>>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/tools.c
>>>>>>>> >>>>> [0]PETSC ERROR: PCGAMGgraph_AGG() line 865 in
>>>>>>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/agg.c
>>>>>>>> >>>>> [0]PETSC ERROR: PCSetUp_GAMG() line 516 in
>>>>>>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/gamg.c
>>>>>>>> >>>>> [0]PETSC ERROR: PCSetUp() line 832 in
>>>>>>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/interface/precon.c
>>>>>>>> >>>>> [0]PETSC ERROR: KSPSetUp() line 261 in
>>>>>>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/ksp/interface/itfunc.c
>>>>>>>> >>>>> [0]PETSC ERROR: KSPSolve() line 385 in
>>>>>>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/ksp/interface/itfunc.c
>>>>>>>> >>>>>
>>>>>>>> >>>>>
>>>>>>>> >>>>> John
>>>>>>>> >>>>>
>>>>>>>> >>>>>
>>>>>>>> >>>>> On Wed, Mar 14, 2012 at 11:27 AM, Mark F. Adams <
>>>>>>>> mark.adams at columbia.edu> wrote:
>>>>>>>> >>>>>
>>>>>>>> >>>>> On Mar 14, 2012, at 11:56 AM, John Mousel wrote:
>>>>>>>> >>>>>
>>>>>>>> >>>>>> Mark,
>>>>>>>> >>>>>>
>>>>>>>> >>>>>> The matrix is asymmetric. Does this require the setting of
>>>>>>>> an option?
>>>>>>>> >>>>>
>>>>>>>> >>>>> Yes: -pc_gamg_sym_graph
>>>>>>>> >>>>>
>>>>>>>> >>>>> Mark
>>>>>>>> >>>>>
>>>>>>>> >>>>>> I pulled petsc-dev this morning, so I should have (at least
>>>>>>>> close to) the latest code.
>>>>>>>> >>>>>>
>>>>>>>> >>>>>> John
>>>>>>>> >>>>>>
>>>>>>>> >>>>>> On Wed, Mar 14, 2012 at 10:54 AM, Mark F. Adams <
>>>>>>>> mark.adams at columbia.edu> wrote:
>>>>>>>> >>>>>>
>>>>>>>> >>>>>> On Mar 14, 2012, at 11:08 AM, John Mousel wrote:
>>>>>>>> >>>>>>
>>>>>>>> >>>>>>> I'm getting the following error when using GAMG.
>>>>>>>> >>>>>>>
>>>>>>>> >>>>>>> petsc-dev/src/ksp/pc/impls/gamg/agg.c:508: smoothAggs:
>>>>>>>> Assertion `sgid==-1' failed.
>>>>>>>> >>>>>>
>>>>>>>> >>>>>> Is it possible that your matrix is structurally asymmetric?
>>>>>>>> >>>>>>
>>>>>>>> >>>>>> This code is evolving fast and so you will need to move to
>>>>>>>> the dev version if you are not already using it. (I think I fixed a bug
>>>>>>>> that hit this assert).
>>>>>>>> >>>>>>
>>>>>>>> >>>>>>>
>>>>>>>> >>>>>>> When I try to alter the type of aggregation at the command
>>>>>>>> line using -pc_gamg_type pa, I'm getting
>>>>>>>> >>>>>>>
>>>>>>>> >>>>>>> [0]PETSC ERROR: [1]PETSC ERROR: --------------------- Error
>>>>>>>> Message ------------------------------------
>>>>>>>> >>>>>>> [1]PETSC ERROR: Unknown type. Check for miss-spelling or
>>>>>>>> missing external package needed for type:
>>>>>>>> >>>>>>> see
>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/installation.html#external
>>>>>>>> !
>>>>>>>> >>>>>>> [1]PETSC ERROR: Unknown GAMG type pa given!
>>>>>>>> >>>>>>>
>>>>>>>> >>>>>>> Has there been a change in the aggregation options? I just
>>>>>>>> pulled petsc-dev this morning.
>>>>>>>> >>>>>>>
>>>>>>>> >>>>>>
>>>>>>>> >>>>>> Yes, this option is gone now. You can use -pc_gamg_type agg
>>>>>>>> for now.
>>>>>>>> >>>>>>
>>>>>>>> >>>>>> Mark
>>>>>>>> >>>>>>
>>>>>>>> >>>>>>> John
>>>>>>>> >>>>>>
>>>>>>>> >>>>>>
>>>>>>>> >>>>>
>>>>>>>> >>>>>
>>>>>>>> >>>>
>>>>>>>> >>>>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >
>>>>>>>> >
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>> <GAMG_kspview.txt><ML_kspview.txt><HYPRE_kspview.txt>
>>>>>>
>>>>>>
>>>>>>
>>>>> <GAMG_kspview.txt>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>> <GAMG_kspview.txt><GAMG_kspview_nosmooth.txt>
>>>
>>>
>>>
>>
>>
>
>