<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">Paul:</div><div class="gmail_quote">Using petsc-dev (we recently added a feature for better display of convergence behavior), I found that <span style="font-size:12.8px">'-sub_pc_factor_mat_ordering_type 1wd' causes a zero pivot:</span></div><div class="gmail_quote"><span style="font-size:12.8px"><br></span></div><div class="gmail_quote"><div class="gmail_quote" style="font-size:12.8px">./ex10 -f0 test.mat -rhs 0 -pc_type asm -pc_asm_overlap 12 -sub_pc_type ilu -sub_pc_factor_mat_ordering_type 1wd -sub_pc_factor_levels 4 -ksp_converged_reason</div><div class="gmail_quote" style="font-size:12.8px">Linear solve did not converge due to DIVERGED_PCSETUP_FAILED iterations 0</div><div class="gmail_quote" style="font-size:12.8px"> PCSETUP_FAILED due to SUBPC_ERROR</div><div class="gmail_quote" style="font-size:12.8px">Number of iterations = 0</div><div class="gmail_quote" style="font-size:12.8px"><br></div><div class="gmail_quote" style="font-size:12.8px">Adding the option '-info | grep zero' shows:</div><div class="gmail_quote" style=""><span style="font-size:12.8px">[0] MatPivotCheck_none(): Detected zero pivot in factorization in row 0 value 0. tolerance 2.22045e-14</span><br></div><div class="gmail_quote" style=""><span style="font-size:12.8px"><br></span></div><div class="gmail_quote" style=""><span style="font-size:12.8px">or with '-ksp_error_if_not_converged':</span></div><div class="gmail_quote" style=""><span style="font-size:12.8px"><div class="gmail_quote">[0]PETSC ERROR: Zero pivot in LU factorization: <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot">http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot</a></div><div class="gmail_quote">[0]PETSC ERROR: Zero pivot row 0 value 0. 
tolerance 2.22045e-14</div></span></div><div class="gmail_quote" style=""><span style="font-size:12.8px"><br></span></div><div class="gmail_quote" style="">'-sub_pc_factor_mat_ordering_type natural' avoids it:<br></div><div class="gmail_quote" style=""><div class="gmail_quote">./ex10 -f0 test.mat -rhs 0 -pc_type asm -pc_asm_overlap 12 -sub_pc_type ilu -sub_pc_factor_mat_ordering_type natural -sub_pc_factor_levels 4 -ksp_converged_reason</div><div class="gmail_quote">Linear solve converged due to CONVERGED_RTOL iterations 1</div><div class="gmail_quote">Number of iterations = 1</div><div class="gmail_quote"> Residual norm < 1.e-12</div></div></div><div class="gmail_quote"><span style="font-size:12.8px"><br></span></div><div class="gmail_quote"><span style="font-size:12.8px">Hong</span></div><div class="gmail_quote"><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div>Greetings,<br><br></div>We have a test that has started failing upon switching from 3.5.4 to 3.6.0 (actually went straight to 3.6.3 but checked this is repeatable with 3.6.0). I've attached the matrix generated with -mat_view binary and a small PETSc program that runs in serial that reproduces the behavior by loading the matrix and solving a linear system (RHS doesn't matter here). For context, this matrix is the Jacobian of a Taylor-Hood approximation of the axisymmetric incompressible Navier-Stokes equations for flow between concentric cylinders (for which there is an exact solution). 
The matrix is for a two-element case, hopefully small enough for debugging.<br><br></div>Running the test program with the following command-line options works with PETSc 3.5.4 but gives a NaN residual with PETSc 3.6.0:<br><br>PETSC_OPTIONS="-pc_type asm -pc_asm_overlap 12 -sub_pc_type ilu -sub_pc_factor_mat_ordering_type 1wd -sub_pc_factor_levels 4"<br><br></div>If I remove the matrix ordering option, all is well again in PETSc 3.6.x:<br><br>PETSC_OPTIONS="-pc_type asm -pc_asm_overlap 12 -sub_pc_type ilu -sub_pc_factor_levels 4"<br><br></div>Those options are nothing special. They were arrived at through trial and error to get decent solver behavior on up to 4 processors, keeping the run time reasonable for the test suite without getting really fancy. Specifically, we'd noticed that this matrix ordering behaved noticeably better on some problems in the test suite.</div><div><br></div><div>As always, thanks for your time.<br></div><div><br></div>Best,<br><br></div>Paul<br><br><br></div>
</blockquote></div><br></div></div>
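[Editor's note, not from the thread: a plausible explanation and a commonly used workaround, sketched under assumptions. The Jacobian of a Taylor-Hood discretization of the incompressible Navier-Stokes equations is a saddle-point matrix whose pressure diagonal block is zero, so an ordering such as 1WD may place a structurally zero diagonal entry in row 0 before ILU(4) fill can populate it, while the natural ordering happens to avoid this. If the 1WD ordering is otherwise desirable, PETSc's factorization shift can sidestep the zero pivot; the following ex10 invocation with the thread's options is untested against the attached test.mat and is only a sketch:]

```shell
# Same solver options as in the thread, plus a diagonal shift in the
# subdomain ILU factorization so a (near-)zero pivot is perturbed rather
# than aborting PCSetUp. The shift changes the preconditioner, not the
# operator, so the Krylov solve still converges to the true solution.
./ex10 -f0 test.mat -rhs 0 \
  -pc_type asm -pc_asm_overlap 12 \
  -sub_pc_type ilu -sub_pc_factor_levels 4 \
  -sub_pc_factor_mat_ordering_type 1wd \
  -sub_pc_factor_shift_type nonzero \
  -ksp_converged_reason
```

[The same effect is available programmatically via PCFactorSetShiftType() with MAT_SHIFT_NONZERO on the subdomain PCs; whether the shifted ILU preconditions this particular matrix well would need to be checked.]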