<div dir="ltr"><div><div><div><div>Hi Matt<br><br></div>I tried to run ex62 with 1 proc (petsc 3.7.2), but it all produces zero<br><br></div>The output is:<br></div>hbui@bermuda:~/workspace/petsc/snes$ es$ ./ex62 run_type full -bc_type dirichlet -refinement_limit 0.00625 -interpolate 1 -snes_monitor_short -snes_converged_reason -snes_view -ksp_type fgmres -ksp_gmres_restart 100 -ksp_rtol 1.0e-9 -ksp_monitor_short -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_factorization_type full -fieldsplit_velocity_ksp_type gmres -fieldsplit_velocity_pc_type lu -fieldsplit_pressure_ksp_rtol 1e-10 -fieldsplit_pressure_pc_type jacobi<br> 0 SNES Function norm 0.265165 <br> 0 KSP Residual norm 0.265165 <br>Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0<br>SNES Object: 1 MPI processes<br> type: newtonls<br> maximum iterations=50, maximum function evaluations=10000<br> tolerances: relative=1e-08, absolute=1e-50, solution=1e-08<br> total number of linear solver iterations=0<br> total number of function evaluations=1<br> norm schedule ALWAYS<br> SNESLineSearch Object: 1 MPI processes<br> type: bt<br> interpolation: cubic<br> alpha=1.000000e-04<br> maxstep=1.000000e+08, minlambda=1.000000e-12<br> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08<br> maximum iterations=40<br> KSP Object: 1 MPI processes<br> type: fgmres<br> GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br> GMRES: happy breakdown tolerance 1e-30<br> maximum iterations=10000, initial guess is zero<br> tolerances: relative=1e-09, absolute=1e-50, divergence=10000.<br> right preconditioning<br> using UNPRECONDITIONED norm type for convergence test<br> PC Object: 1 MPI processes<br> type: fieldsplit<br> FieldSplit with Schur preconditioner, factorization FULL<br> Preconditioner for the Schur complement formed from A11<br> Split info:<br> Split number 0 Defined by IS<br> Split number 1 Defined by IS<br> KSP solver for A00 block<br> KSP Object: (fieldsplit_velocity_) 1 MPI processes<br> type: gmres<br> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br> GMRES: happy breakdown tolerance 1e-30<br> maximum iterations=10000, initial guess is zero<br> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br> left preconditioning<br> using PRECONDITIONED norm type for convergence test<br> PC Object: (fieldsplit_velocity_) 1 MPI processes<br> type: lu<br> LU: out-of-place factorization<br> tolerance for zero pivot 2.22045e-14<br> matrix ordering: nd<br> factor fill ratio given 5., needed 1.<br> Factored matrix follows:<br> Mat Object: 1 MPI processes<br> type: seqaij<br> rows=512, cols=512, bs=2<br> package used to perform factorization: petsc<br> total: nonzeros=1024, allocated nonzeros=1024<br> total number of mallocs used during MatSetValues calls =0<br> using I-node routines: found 256 nodes, limit used is 5<br> linear system matrix = precond matrix:<br> Mat Object: (fieldsplit_velocity_) 1 MPI processes<br> type: seqaij<br> rows=512, cols=512, bs=2<br> total: nonzeros=1024, allocated nonzeros=1024<br> total number of mallocs used during MatSetValues calls =0<br> using I-node routines: found 256 nodes, limit used is 5<br> KSP solver for S = A11 - A10 inv(A00) A01 <br> KSP Object: (fieldsplit_pressure_) 1 MPI processes<br> type: gmres<br> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br> 
GMRES: happy breakdown tolerance 1e-30<br> maximum iterations=10000, initial guess is zero<br> tolerances: relative=1e-10, absolute=1e-50, divergence=10000.<br> left preconditioning<br> using PRECONDITIONED norm type for convergence test<br> PC Object: (fieldsplit_pressure_) 1 MPI processes<br> type: jacobi<br> linear system matrix followed by preconditioner matrix:<br> Mat Object: (fieldsplit_pressure_) 1 MPI processes<br> type: schurcomplement<br> rows=256, cols=256<br> has attached null space<br> Schur complement A11 - A10 inv(A00) A01<br> A11<br> Mat Object: (fieldsplit_pressure_) 1 MPI processes<br> type: seqaij<br> rows=256, cols=256<br> total: nonzeros=256, allocated nonzeros=256<br> total number of mallocs used during MatSetValues calls =0<br> has attached null space<br> not using I-node routines<br> A10<br> Mat Object: 1 MPI processes<br> type: seqaij<br> rows=256, cols=512<br> total: nonzeros=512, allocated nonzeros=512<br> total number of mallocs used during MatSetValues calls =0<br> not using I-node routines<br> KSP of A00<br> KSP Object: (fieldsplit_velocity_) 1 MPI processes<br> type: gmres<br> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br> GMRES: happy breakdown tolerance 1e-30<br> maximum iterations=10000, initial guess is zero<br> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br> left preconditioning<br> using PRECONDITIONED norm type for convergence test<br> PC Object: (fieldsplit_velocity_) 1 MPI processes<br> type: lu<br> LU: out-of-place factorization<br> tolerance for zero pivot 2.22045e-14<br> matrix ordering: nd<br> factor fill ratio given 5., needed 1.<br> Factored matrix follows:<br> Mat Object: 1 MPI processes<br> type: seqaij<br> rows=512, cols=512, bs=2<br> package used to perform factorization: petsc<br> total: nonzeros=1024, allocated nonzeros=1024<br> total number of mallocs used during MatSetValues calls =0<br> using I-node routines: found 256 nodes, limit used is 5<br> linear system matrix = precond matrix:<br> Mat Object: (fieldsplit_velocity_) 1 MPI processes<br> type: seqaij<br> rows=512, cols=512, bs=2<br> total: nonzeros=1024, allocated nonzeros=1024<br> total number of mallocs used during MatSetValues calls =0<br> using I-node routines: found 256 nodes, limit used is 5<br> A01<br> Mat Object: 1 MPI processes<br> type: seqaij<br> rows=512, cols=256, rbs=2, cbs = 1<br> total: nonzeros=512, allocated nonzeros=512<br> total number of mallocs used during MatSetValues calls =0<br> using I-node routines: found 256 nodes, limit used is 5<br> Mat Object: (fieldsplit_pressure_) 1 MPI processes<br> type: seqaij<br> rows=256, cols=256<br> total: nonzeros=256, allocated nonzeros=256<br> total number of mallocs used during MatSetValues calls =0<br> has attached null space<br> not using I-node routines<br> linear system matrix = precond matrix:<br> Mat Object: 1 MPI processes<br> type: seqaij<br> rows=768, cols=768<br> total: nonzeros=2304, allocated nonzeros=2304<br> total number of mallocs used during MatSetValues calls =0<br> has attached null space<br> using I-node routines: found 256 nodes, limit used is 5<br>Number of SNES iterations = 0<br>L_2 Error: 1.01 [0.929, 0.407]<br>Solution<br>Vec Object: 1 MPI processes<br> type: seq<br>0.<br>0.<br>....<br><br></div>Am I doing something wrong?<br><div><div><br></div><div>Giang<br><br></div></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div 
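(In case it helps anyone reproducing this: the same solver configuration can also be set from code with PetscOptionsSetValue before the solver is set up. Below is only an untested sketch mirroring the command line above, not code from ex62; the helper name SetStokesSolverOptions is made up.)

/* Untested sketch: set the same Schur fieldsplit options from code
   (PETSc 3.7 signature; the NULL argument selects the global options
   database). Call after PetscInitialize() and before SNESSetFromOptions(). */
#include <petscsys.h>

PetscErrorCode SetStokesSolverOptions(void)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscOptionsSetValue(NULL, "-ksp_type", "fgmres");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-ksp_gmres_restart", "100");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-ksp_rtol", "1.0e-9");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-pc_type", "fieldsplit");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-pc_fieldsplit_type", "schur");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-pc_fieldsplit_schur_factorization_type", "full");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-fieldsplit_velocity_ksp_type", "gmres");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-fieldsplit_velocity_pc_type", "lu");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-fieldsplit_pressure_ksp_rtol", "1e-10");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-fieldsplit_pressure_pc_type", "jacobi");CHKERRQ(ierr);
  PetscFunctionReturn(0);
}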
dir="ltr">Giang</div></div></div>
<br><div class="gmail_quote">On Tue, May 3, 2016 at 4:44 AM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="">On Mon, May 2, 2016 at 8:29 PM, <a href="mailto:ztdepyahoo@163.com" target="_blank">ztdepyahoo@163.com</a> <span dir="ltr"><<a href="mailto:ztdepyahoo@163.com" target="_blank">ztdepyahoo@163.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
<div><span></span><div><div dir="ltr">Dear professor:</div></div><div dir="ltr"> I want to write a parallel 3D CFD code based on unstructred grid, does Petsc has DMPlex examples to start with.</div><div dir="ltr"></div></div></div></blockquote><div><br></div></span><div>SNES ex62 is an unstructured grid Stokes problem discretized with low-order finite elements.</div><div><br></div><div>Of course, all the different possible choices will impact the design.</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div><div dir="ltr">Regards</div></div><span class="HOEnZb"><font color="#888888">
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
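PS, for the DMPlex question quoted above (this is not from the thread, just an illustration): a common starting point for an unstructured parallel code is to read a mesh file, distribute it, and view it. An untested minimal sketch follows; "mesh.msh" is a placeholder file name.

/* Untested sketch of a DMPlex starting point for an unstructured
   parallel code (PETSc 3.7 API). "mesh.msh" is a placeholder. */
#include <petscdmplex.h>

int main(int argc, char **argv)
{
  DM             dm, dmDist = NULL;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  /* Read an unstructured mesh; PETSC_TRUE asks DMPlex to interpolate it,
     i.e. to create the intermediate faces and edges as well. */
  ierr = DMPlexCreateFromFile(PETSC_COMM_WORLD, "mesh.msh", PETSC_TRUE, &dm);CHKERRQ(ierr);
  /* Partition and distribute the mesh over all processes (0 = no overlap). */
  ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}
  /* Pass -dm_view on the command line to print the distributed mesh. */
  ierr = DMViewFromOptions(dm, NULL, "-dm_view");CHKERRQ(ierr);
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}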