<div dir="ltr"><div>Thanks for the link.</div><div><br></div><div>We have set a Schur factorization of type FULL, and we passed it when we run the code with<br></div><div> -pc_fieldsplit_schur_fact_type full</div><div><br></div><div></div><div>Here there is the output of -ksp_view<br></div><div><br></div><div>KSP Object: 1 MPI processes<br> type: fgmres<br> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br> happy breakdown tolerance 1e-30<br> maximum iterations=1, initial guess is zero<br> tolerances: relative=1e-08, absolute=1e-50, divergence=10000.<br> right preconditioning<br> using UNPRECONDITIONED norm type for convergence test<br>PC Object: 1 MPI processes<br> type: fieldsplit<br> FieldSplit with Schur preconditioner, factorization FULL<br> Preconditioner for the Schur complement formed from A11<br> Split info:<br> Split number 0 Defined by IS<br> Split number 1 Defined by IS<br> KSP solver for A00 block<br> KSP Object: (fieldsplit_0_) 1 MPI processes<br> type: gmres<br> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br> happy breakdown tolerance 1e-30<br> maximum iterations=10000, initial guess is zero<br> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br> left preconditioning<br> using PRECONDITIONED norm type for convergence test<br> PC Object: (fieldsplit_0_) 1 MPI processes<br> type: ilu<br> out-of-place factorization<br> 0 levels of fill<br> tolerance for zero pivot 2.22045e-14<br> matrix ordering: natural<br> factor fill ratio given 1., needed 1.<br> Factored matrix follows:<br> Mat Object: 1 MPI processes<br> type: seqaij<br> rows=44, cols=44<br> package used to perform factorization: petsc<br> total: nonzeros=482, allocated nonzeros=482<br> total number of mallocs used during MatSetValues calls=0<br> using I-node routines: found 13 nodes, limit used is 5<br> linear system matrix = precond matrix:<br> Mat Object: (fieldsplit_0_) 1 MPI processes<br> type: seqaij<br> rows=44, cols=44<br> total: nonzeros=482, allocated nonzeros=482<br> total number of mallocs used during MatSetValues calls=0<br> using I-node routines: found 13 nodes, limit used is 5<br> KSP solver for S = A11 - A10 inv(A00) A01 <br> KSP Object: (fieldsplit_1_) 1 MPI processes<br> type: gmres<br> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br> happy breakdown tolerance 1e-30<br> maximum iterations=1, initial guess is zero<br> tolerances: relative=1e-09, absolute=1e-50, divergence=10000.<br> left preconditioning<br> using PRECONDITIONED norm type for convergence test<br> PC Object: (fieldsplit_1_) 1 MPI processes<br> type: shell<br> no name<br> linear system matrix followed by preconditioner matrix:<br> Mat Object: (fieldsplit_1_) 1 MPI processes<br> type: schurcomplement<br> rows=20, cols=20<br> Schur complement A11 - A10 inv(A00) A01<br> A11<br> Mat Object: (fieldsplit_1_) 1 MPI processes<br> type: seqaij<br> rows=20, cols=20<br> total: nonzeros=112, allocated nonzeros=112<br> total number of mallocs used during MatSetValues calls=0<br> using I-node routines: found 10 nodes, limit used is 5<br> A10<br> Mat Object: 1 MPI processes<br> type: seqaij<br> rows=20, cols=44<br> total: nonzeros=160, allocated nonzeros=160<br> total number of mallocs used during MatSetValues calls=0<br> using I-node routines: found 10 nodes, limit used is 5<br> KSP of A00<br> KSP Object: (fieldsplit_0_) 1 MPI processes<br> type: gmres<br> restart=30, using 
Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br> happy breakdown tolerance 1e-30<br> maximum iterations=10000, initial guess is zero<br> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br> left preconditioning<br> using PRECONDITIONED norm type for convergence test<br> PC Object: (fieldsplit_0_) 1 MPI processes<br> type: ilu<br> out-of-place factorization<br> 0 levels of fill<br> tolerance for zero pivot 2.22045e-14<br> matrix ordering: natural<br> factor fill ratio given 1., needed 1.<br> Factored matrix follows:<br> Mat Object: 1 MPI processes<br> type: seqaij<br> rows=44, cols=44<br> package used to perform factorization: petsc<br> total: nonzeros=482, allocated nonzeros=482<br> total number of mallocs used during MatSetValues calls=0<br> using I-node routines: found 13 nodes, limit used is 5<br> linear system matrix = precond matrix:<br> Mat Object: (fieldsplit_0_) 1 MPI processes<br> type: seqaij<br> rows=44, cols=44<br> total: nonzeros=482, allocated nonzeros=482<br> total number of mallocs used during MatSetValues calls=0<br> using I-node routines: found 13 nodes, limit used is 5<br> A01<br> Mat Object: 1 MPI processes<br> type: seqaij<br> rows=44, cols=20<br> total: nonzeros=156, allocated nonzeros=156<br> total number of mallocs used during MatSetValues calls=0<br> using I-node routines: found 12 nodes, limit used is 5<br> Mat Object: (fieldsplit_1_) 1 MPI processes<br> type: seqaij<br> rows=20, cols=20<br> total: nonzeros=112, allocated nonzeros=112<br> total number of mallocs used during MatSetValues calls=0<br> using I-node routines: found 10 nodes, limit used is 5<br> linear system matrix = precond matrix:<br> Mat Object: 1 MPI processes<br> type: seqaij<br> rows=64, cols=64<br> total: nonzeros=910, allocated nonzeros=2432<br> total number of mallocs used during MatSetValues calls=128<br> using I-node routines: found 23 nodes, limit used is 5</div><div><br></div><div><br></div><div>We would like to understand why the first r.h.s, passed to our function for the Schur preconditioner, is not <br></div><div>b_1-A_10*inv(A_00)*b_0,</div><div>even if we used the full factorization ( without dropping any terms ).</div><div><br></div><div>Thank you,</div><div>Elena<br></div><div><br></div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Il giorno mer 10 feb 2021 alle ore 18:05 Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> ha scritto:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Wed, Feb 10, 2021 at 11:51 AM Matteo Semplice <<a href="mailto:matteo.semplice@uninsubria.it" target="_blank">matteo.semplice@uninsubria.it</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Dear PETSc users,<br>
we are trying to program a preconditioner for the Schur complement of a Stokes system, but the r.h.s. of the Schur complement system seems to differ from what we expect by a scale factor which we don't understand.

Our setup has a system matrix A divided into 2x2 blocks for the velocity and pressure variables. We have programmed our preconditioner in a routine PrecondSchur, and in the main program we do:

  PC pc;
  KSPGetPC(kspA, &pc);
  PCSetFromOptions(pc);                          /* picks up the -pc_* options below */
  KSPSetOperators(kspA, A, A);
  KSPSetInitialGuessNonzero(kspA, PETSC_FALSE);
  KSPSetFromOptions(kspA);

  KSP      *subksp;
  PetscInt  nfield;
  PCSetUp(pc);                                   /* set up before extracting the split sub-KSPs */
  PCFieldSplitGetSubKSP(pc, &nfield, &subksp);

  PC pcSchur;
  KSPGetPC(subksp[1], &pcSchur);                 /* subksp[1] is the Schur complement solver */
  PCSetType(pcSchur, PCSHELL);
  PCShellSetApply(pcSchur, PrecondSchur);
  KSPSetFromOptions(subksp[1]);

and eventually

  KSPSolve(kspA, b, solution);

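For completeness, PrecondSchur has the signature that PCShellSetApply expects; the following is only a minimal sketch with a placeholder body (our actual routine applies an approximate inverse of the Schur complement):

  /* Sketch only: signature required by PCShellSetApply, computing y = M^{-1} x.
     The identity action below is a placeholder, not our real preconditioner. */
  PetscErrorCode PrecondSchur(PC pc, Vec x, Vec y)
  {
    PetscErrorCode ierr;

    ierr = VecCopy(x, y); CHKERRQ(ierr);  /* placeholder: y <- x */
    return 0;
  }
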
We run the code with the options

  -ksp_type fgmres \
  -pc_type fieldsplit -pc_fieldsplit_type schur \
  -pc_fieldsplit_schur_fact_type full \

and, from reading section 2.3.5 of the PETSc manual, we'd expect the first r.h.s. passed to PrecondSchur to be exactly

  b_1 - A_10*inv(A_00)*b_0

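Spelling out our reading of the manual, the FULL factorization is

  A = L * D * U,  where
    L = [ I, 0 ; A_10*inv(A_00), I ]
    D = [ A_00, 0 ; 0, S ],  S = A_11 - A_10*inv(A_00)*A_01
    U = [ I, inv(A_00)*A_01 ; 0, I ]

so inv(A) = inv(U)*inv(D)*inv(L), and applying inv(L) to (b_0; b_1) gives (b_0; b_1 - A_10*inv(A_00)*b_0); the second component should then be the r.h.s. of the S solve.
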
Instead (judging from a monitor function attached to the subksp[1] solver), the first r.h.s. appears to be a scalar multiple of the above vector; we are guessing that we should take this multiplicative factor into account in our preconditioner routine, but we cannot understand where it comes from or how its value is determined.

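The monitor is essentially the sketch below (the name MonitorSchurRhs and the viewing choices are illustrative, not verbatim from our code); it uses KSPGetRhs to inspect the r.h.s. handed to the sub-solver:

  /* Illustrative sketch of the monitor attached to subksp[1] via
     KSPMonitorSet(subksp[1], MonitorSchurRhs, NULL, NULL); */
  PetscErrorCode MonitorSchurRhs(KSP ksp, PetscInt it, PetscReal rnorm, void *ctx)
  {
    PetscErrorCode ierr;
    Vec            b;

    ierr = KSPGetRhs(ksp, &b); CHKERRQ(ierr);  /* r.h.s. of this (Schur) solve */
    ierr = PetscPrintf(PETSC_COMM_SELF, "Schur solve iteration %D, residual norm %g\n", it, (double)rnorm); CHKERRQ(ierr);
    if (!it) { ierr = VecView(b, PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); }  /* view b once */
    return 0;
  }
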
Could you explain to us exactly what is going on inside the Schur fieldsplit PC, or point us to some working code example?

Thanks in advance!

    Matteo

1) It is hard to understand solver questions without the output of -ksp_view.

2) The RHS will depend on the kind of factorization you are using for the system:

   https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFieldSplitSetSchurFactType.html#PCFieldSplitSetSchurFactType

   I can see which one you are using in the view output.

  Thanks,

     Matt