Good point, this is indeed linear elasticity. Following your suggestion I first got the following error since I now use MATBAIJ:

[0]PCSetData_AGG bs=3 MM=30441
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Arguments are incompatible!
[0]PETSC ERROR: MatMatMult requires A, mpibaij, to be compatible with B, mpiaij!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 6, Mon Feb 11 12:26:34 CST 2013
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: /Users/hugo/Documents/jetstream/jetstream_x86_64 on a arch-darw named user204-27.wireless.utoronto.ca by hugo Mon Apr 15 17:31:55 2013
[0]PETSC ERROR: Libraries linked from /Users/hugo/Documents/petsc-3.3-p6/arch-darwin-c-opt/lib
[0]PETSC ERROR: Configure run at Mon Feb 18 15:08:07 2013
[0]PETSC ERROR: Configure options --with-debugging=0
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: MatMatMult() line 8617 in src/mat/interface/matrix.c
[0]PETSC ERROR: PCGAMGOptprol_AGG() line 1358 in src/ksp/pc/impls/gamg/agg.c
[0]PETSC ERROR: PCSetUp_GAMG() line 673 in src/ksp/pc/impls/gamg/gamg.c
[0]PETSC ERROR: PCSetUp() line 832 in src/ksp/pc/interface/precon.c
[0]PETSC ERROR: PCApply() line 380 in src/ksp/pc/interface/precon.c
[0]PETSC ERROR: KSPSolve_CG() line 139 in src/ksp/ksp/impls/cg/cg.c
[0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c

I tried converting the matrix to MATAIJ with:

call MatConvert(Paa,MATAIJ,MAT_INITIAL_MATRIX,Paa2,Pierr)

and now I have this error (with ksp_view):

[0]PCSetData_AGG bs=1 MM=30441
 KSPCG resid. tolerance target = 1.000E-10
 KSPCG initial residual |res0| = 8.981E-02
 KSPCG iter = 0: |res|/|res0| = 1.000E+00
 KSPCG iter = 1: |res|/|res0| = 4.949E-01
KSP Object: 2 MPI processes
 type: cg
 maximum iterations=4000
 tolerances: relative=1e-10, absolute=1e-50, divergence=10000
 left preconditioning
 using nonzero initial guess
 using PRECONDITIONED norm type for convergence test
PC Object: 2 MPI processes
 type: gamg
 MG: type is MULTIPLICATIVE, levels=2 cycles=v
 Cycles per PCApply=1
 Using Galerkin computed coarse grid matrices
 Coarse grid solver -- level -------------------------------
 KSP Object: (mg_coarse_) 2 MPI processes
 type: gmres
 GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
 GMRES: happy breakdown tolerance 1e-30
 maximum iterations=1, initial guess is zero
 tolerances: relative=1e-05, absolute=1e-50, divergence=10000
 left preconditioning
 using NONE norm type for convergence test
 PC Object: (mg_coarse_) 2 MPI processes
 type: bjacobi
 block Jacobi: number of blocks = 2
 Local solve info for each block is in the following KSP and PC objects:
 [0] number of local blocks = 1, first local block number = 0
 [0] local block number 0
 KSP Object: (mg_coarse_sub_) 1 MPI processes
 type: preonly
 KSP Object: (mg_coarse_sub_) maximum iterations=10000, initial guess is zero
 tolerances: relative=1e-05, absolute=1e-50, divergence=10000
 1 MPI processes
 type: preonly
 left preconditioning
 maximum iterations=10000, initial guess is zero
 using NONE norm type for convergence test
 PC Object: (mg_coarse_sub_) 1 MPI processes
 tolerances: relative=1e-05, absolute=1e-50, divergence=10000
 left preconditioning
 using NONE norm type for convergence test
 PC Object: (mg_coarse_sub_) type: lu
 LU: out-of-place factorization
 1 MPI processes
 type: lu
 tolerance for zero pivot 2.22045e-14
 LU: out-of-place factorization
 matrix ordering: nd
 factor fill ratio given 5, needed 5.06305
 tolerance for zero pivot 2.22045e-14
 Factored matrix follows:
 matrix ordering: nd
 factor fill ratio given 5, needed 0
 Matrix Object: Factored matrix follows:
 1 MPI processes
 type: seqaij
Matrix Object: 1 MPI processes
 rows=552, cols=552
 type: seqaij
 package used to perform factorization: petsc
 rows=0, cols=0
 total: nonzeros=106962, allocated nonzeros=106962
 package used to perform factorization: petsc
 total number of mallocs used during MatSetValues calls =0
 not using I-node routines
 linear system matrix = precond matrix:
 Matrix Object: 1 MPI processes
 total: nonzeros=1, allocated nonzeros=1
 total number of mallocs used during MatSetValues calls =0
 not using I-node routines
 linear system matrix = precond matrix:
 type: seqaij
 Matrix Object: rows=552, cols=552
 1 MPI processes
 type: seqaij
 total: nonzeros=21126, allocated nonzeros=21126
 rows=0, cols=0
 total number of mallocs used during MatSetValues calls =0
 total: nonzeros=0, allocated nonzeros=0
 not using I-node routines
 total number of mallocs used during MatSetValues calls =0
 - - - - - - - - - - - - - - - - - -
 not using I-node routines
 [1] number of local blocks = 1, first local block number = 1
 [1] local block number 0
 - - - - - - - - - - - - - - - - - -
 linear system matrix = precond matrix:
 Matrix Object: 2 MPI processes
 type: mpiaij
 rows=552, cols=552
 total: nonzeros=21126, allocated nonzeros=21126
 total number of mallocs used during MatSetValues calls =0
 not using I-node (on process 0) routines
 Down solver (pre-smoother) on level 1 -------------------------------
 KSP Object: (mg_levels_1_) 2 MPI processes
 type: chebyshev
 Chebyshev: eigenvalue estimates: min = 0.0508405, max = 5.88746
 maximum iterations=2
 tolerances: relative=1e-05, absolute=1e-50, divergence=10000
 left preconditioning
 using nonzero initial guess
 using NONE norm type for convergence test
 PC Object: (mg_levels_1_) 2 MPI processes
 type: jacobi
 linear system matrix = precond matrix:
 Matrix Object: 2 MPI processes
 type: mpiaij
 rows=60879, cols=60879
 total: nonzeros=4509729, allocated nonzeros=4509729
 total number of mallocs used during MatSetValues calls =0
 using I-node (on process 0) routines: found 10147 nodes, limit used is 5
 Up solver (post-smoother) same as down solver (pre-smoother)
 linear system matrix = precond matrix:
 Matrix Object: 2 MPI processes
 type: mpiaij
 rows=60879, cols=60879
 total: nonzeros=4509729, allocated nonzeros=4509729
 total number of mallocs used during MatSetValues calls =0
 using I-node (on process 0) routines: found 10147 nodes, limit used is 5
 Error in FEMesh_Mod::moveFEMeshPETSc() : KSP returned with error code = -8
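
In case it helps interpret that last line, error code -8 is, I believe, KSP_DIVERGED_INDEFINITE_PC again. I also notice that PCSetData_AGG now reports bs=1 where the MATBAIJ run reported bs=3, so the block size seems to be lost in the conversion. This is roughly what I am doing around the solve at the moment (just a sketch; the MatSetBlockSize call is my guess at restoring the block information on the converted matrix, and I am not sure it is legal or needed at that point):

 ! convert the assembled MATBAIJ operator to AIJ so GAMG's MatMatMult works
 call MatConvert(Paa,MATAIJ,MAT_INITIAL_MATRIX,Paa2,Pierr)
 ! guess: tell the AIJ copy that the unknowns still come in 3x3 displacement
 ! blocks, since PCSetData_AGG reports bs=1 after the conversion
 call MatSetBlockSize(Paa2,3,Pierr)
 ! use the converted matrix as both operator and preconditioner matrix
 call KSPSetOperators(Pksp,Paa2,Paa2,SAME_NONZERO_PATTERN,Pierr)

Or should I simply assemble in MATAIJ from the start and skip MATBAIJ altogether?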

--
 Hugo Gagnon

On 2013-04-15, at 5:26 PM, Mark F. Adams <mark.adams@columbia.edu> wrote:

> Its probably not worth trying to verify the code with ILU(3) because the
> space of algorithms is large as Jed points out (e.g., ILU(3) does not
> fully define the solver unless they use the same node ordering, shifting
> strategies and whatever else your ILU is doing to make ILU not suck). It
> looks like you are doing 3D elasticity. Try
>
> -pc_type gamg
> -pc_gamg_agg_nsmooths 1
>
> assuming you have v3.3 or higher.
>
>
> On Apr 15, 2013, at 5:15 PM, Jed Brown <jedbrown@mcs.anl.gov> wrote:
>
>> Hugo Gagnon <opensource.petsc@user.fastmail.fm> writes:
>>
>>> For the problem I'm describing my serial in-house solver does not work
>>> with ILU(0) but works with ILU(3). I have no option to run Jacobi.
>>> When I apply the same problem to PETSc's PC solver with ILU(3) in
>>> serial I get KSP_DIVERGED_INDEFINITE_PC
>>
>> Does your in-house ILU(3) use a different ordering? What shift scheme
>> does it use?
>>
>>> on the first iteration (in MPI the solution somewhat converges but
>>> very slowly).
>>>
>>> call KSPGetPC(Pksp,Ppc,Pierr)
>>> call PCSetType(Ppc,PCILU,Pierr)
>>> call PCFactorSetLevels(Ppc,3,Pierr)
>>>
>>> This effectively changes the fill level from 0 to 3, right?
>>
>> This only works in serial. Check the -ksp_view output to see what is
>> done. You should just call KSPSetFromOptions() and use run-time options
>> to configure the solver. You can do it from code later, but writing
>> code is slow to figure out what works.
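
P.S. Following Jed's point about run-time options, I plan to configure the solver from the command line rather than in code. A rough sketch of what I have in mind (Prhs and Psol are placeholders for my right-hand side and solution vectors, error checking omitted):

 call KSPSetType(Pksp,KSPCG,Pierr)
 ! pick up -pc_type gamg, -pc_gamg_agg_nsmooths 1, etc. from the command line
 call KSPSetFromOptions(Pksp,Pierr)
 call KSPSolve(Pksp,Prhs,Psol,Pierr)

run as, for example:

 mpiexec -n 2 ./jetstream_x86_64 -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_view

That way the multigrid settings stay out of the code while I figure out what works.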