<div dir="ltr"><div dir="ltr"><div dir="ltr">On Tue, Mar 19, 2019 at 7:34 AM Y. Shidi <<a href="mailto:ys453@cam.ac.uk">ys453@cam.ac.uk</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">> Perhaps you need a better preconditioner for it.<br>
>
> Hello Matt,
> Thank you for your help.
> Yes, I think I need a better preconditioner;
> it requires about 600 iterations for the Schur complement.
> Are there any tutorials for this? Or shall I just
> try different combinations and find the most suitable
> one?

For incompressible Stokes with a constant viscosity, the Schur complement is spectrally
equivalent to the weighted mass matrix

  1/nu M

Using that for the preconditioning matrix is the first step. It is more complicated for Navier-Stokes,
but starting with a mass matrix can help, especially for slow flows. For very new approaches to
NS, see something like

  https://arxiv.org/abs/1810.03315

  Thanks,

     Matt
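A minimal sketch of one way to wire that up, assuming the outer KSP already has a
Schur-type PCFIELDSPLIT configured and that a pressure mass matrix Mp has been
assembled separately (the helper name and variables here are hypothetical, not
something from this thread):

  #include <petscksp.h>

  /* Scale an assembled pressure mass matrix by 1/nu and use it as the matrix
     from which the Schur-complement preconditioner is built.  Assumes ksp has
     PCFIELDSPLIT with -pc_fieldsplit_type schur, and nu is the constant
     viscosity. */
  static PetscErrorCode AttachPressureMassMatrix(KSP ksp, Mat Mp, PetscReal nu)
  {
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = MatScale(Mp, 1.0/nu);CHKERRQ(ierr);   /* Mp <- (1/nu) M */
    ierr = PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_USER, Mp);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

With the mass matrix attached this way, the -fieldsplit_pressure_* (or
-fieldsplit_1_*) preconditioner is built from the scaled mass matrix rather than
from an approximation of the Schur complement itself, so something cheap such as
jacobi is often sufficient there.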
> Kind Regards,
> Shidi
>
> On 2019-03-19 11:17, Matthew Knepley wrote:
>> On Tue, Mar 19, 2019 at 6:59 AM Y. Shidi via petsc-users
>> <petsc-users@mcs.anl.gov> wrote:
>>
>>> Hello Barry,
>>>
>>> Thank you for your reply.
>>>
>>> I reduced the tolerances and got the desired solution.
>>>
>>> I am solving multiphase incompressible Navier-Stokes problems, and
>>> currently we are using an augmented Lagrangian technique with Uzawa
>>> iteration.
>>> Because the problems are getting larger, we are also looking for some
>>> other methods for solving the linear system.
>>> I followed the PCFieldSplit tutorial from:
>>>
>>> https://www.mcs.anl.gov/petsc/documentation/tutorials/MSITutorial.pdf [1]
>>>
>>> However, it takes about 10 s to finish one iteration, and overall
>>> it requires about 150 s to complete one time step with 100k unknowns,
>>> which is a long time compared to our current solver (10 s for one
>>> time step).
>>
>> The first thing to do is look at how many Schur complement iterations
>> you are doing:
>>
>>   -fieldsplit_pressure_ksp_monitor_true_residual
>>
>> Perhaps you need a better preconditioner for it.
>>
>>   Thanks,
>>
>>      Matt
>>
>>> I tried the following options:
>>> 1).
>>> -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur
>>> -pc_fieldsplit_schur_factorization_type lower
>>> -fieldsplit_velocity_ksp_type preonly
>>> -fieldsplit_velocity_pc_type gamg
>>> -fieldsplit_pressure_ksp_type minres
>>> -fieldsplit_pressure_pc_type none
>>> 2).
>>> -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur
>>> -pc_fieldsplit_schur_factorization_type diag
>>> -fieldsplit_velocity_ksp_type preonly
>>> -fieldsplit_velocity_pc_type gamg
>>> -fieldsplit_pressure_ksp_type minres
>>> -fieldsplit_pressure_pc_type none
>>> 3).
>>> -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur
>>> -pc_fieldsplit_schur_factorization_type full
>>> -fieldsplit_velocity_ksp_type preonly
>>> -fieldsplit_velocity_pc_type lu
>>> -fieldsplit_pressure_ksp_rtol 1e-10 -fieldsplit_pressure_pc_type jacobi
>>>
>>> So I am wondering if there are any other options that can help improve
>>> the PCFieldSplit performance.
>>>
>>> Kind Regards,
>>> Shidi
>>>
>>> On 2019-03-17 00:05, Smith, Barry F. wrote:
>>>>> On Mar 16, 2019, at 6:50 PM, Y. Shidi via petsc-users
>>>>> <petsc-users@mcs.anl.gov> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> I am trying to solve the incompressible Navier-Stokes equations with
>>>>> PCFieldSplit.
>>>>>
>>>>> The large matrix and vectors are formed by MatCreateNest()
>>>>> and VecCreateNest().
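(For reference, a minimal sketch of this kind of MatCreateNest()/PCFieldSplit
setup; every name here is hypothetical rather than taken from this thread.  The
split names passed to PCFieldSplitSetIS() determine the option prefixes, e.g.
"velocity" gives -fieldsplit_velocity_*, while passing NULL gives the numbered
-fieldsplit_0_*/-fieldsplit_1_* prefixes used in the options below.)

  #include <petscksp.h>

  /* Build a 2x2 MATNEST [A B; C 0] from pre-assembled blocks and tell
     PCFIELDSPLIT which index sets define the velocity and pressure splits. */
  static PetscErrorCode SetupNestedStokes(KSP ksp, Mat A, Mat B, Mat C, IS isu, IS isp)
  {
    Mat            blocks[4], K;
    IS             is[2];
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    blocks[0] = A;  blocks[1] = B;      /* row-major: [A B; C 0] */
    blocks[2] = C;  blocks[3] = NULL;
    is[0] = isu;    is[1] = isp;
    ierr = MatCreateNest(PETSC_COMM_WORLD, 2, is, 2, is, blocks, &K);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, K, K);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
    ierr = PCFieldSplitSetIS(pc, "velocity", isu);CHKERRQ(ierr);
    ierr = PCFieldSplitSetIS(pc, "pressure", isp);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }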
>>>>> The system is solved directly by the following command:
>>>>> -ksp_type fgmres
>>>>> -pc_type fieldsplit
>>>>> -pc_fieldsplit_type schur
>>>>> -pc_fieldsplit_schur_fact_type full
>>>>> -ksp_converged_reason
>>>>> -ksp_monitor_true_residual
>>>>> -fieldsplit_0_ksp_type preonly
>>>>> -fieldsplit_0_pc_type cholesky
>>>>> -fieldsplit_0_pc_factor_mat_solver_package mumps
>>>>> -mat_mumps_icntl_28 2
>>>>> -mat_mumps_icntl_29 2
>>>>> -fieldsplit_1_ksp_type preonly
>>>>> -fieldsplit_1_pc_type jacobi
>>>>> Output:
>>>>> 0 KSP unpreconditioned resid norm 1.214252932161e+04 true resid norm 1.214252932161e+04 ||r(i)||/||b|| 1.000000000000e+00
>>>>> 1 KSP unpreconditioned resid norm 1.642782495109e-02 true resid norm 1.642782495109e-02 ||r(i)||/||b|| 1.352916226594e-06
>>>>> Linear solve converged due to CONVERGED_RTOL iterations 1
>>>>>
>>>>> The system is solved iteratively by the following command:
>>>>> -ksp_type fgmres
>>>>> -pc_type fieldsplit
>>>>> -pc_fieldsplit_type schur
>>>>> -pc_fieldsplit_schur_factorization_type diag
>>>>> -ksp_converged_reason
>>>>> -ksp_monitor_true_residual
>>>>> -fieldsplit_0_ksp_type preonly
>>>>> -fieldsplit_0_pc_type gamg
>>>>> -fieldsplit_1_ksp_type minres
>>>>> -fieldsplit_1_pc_type none
>>>>> Output:
>>>>> 0 KSP unpreconditioned resid norm 1.214252932161e+04 true resid norm 1.214252932161e+04 ||r(i)||/||b|| 1.000000000000e+00
>>>>> 1 KSP unpreconditioned resid norm 2.184037364915e+02 true resid norm 2.184037364915e+02 ||r(i)||/||b|| 1.798667565109e-02
>>>>> 2 KSP unpreconditioned resid norm 2.120097409539e+02 true resid norm 2.120097409635e+02 ||r(i)||/||b|| 1.746009709742e-02
>>>>> 3 KSP unpreconditioned resid norm 4.364091658268e+01 true resid norm 4.364091658575e+01 ||r(i)||/||b|| 3.594054865332e-03
>>>>> 4 KSP unpreconditioned resid norm 2.632671796885e+00 true resid norm 2.632671797020e+00 ||r(i)||/||b|| 2.168141189773e-04
>>>>> 5 KSP unpreconditioned resid norm 2.209213998004e+00 true resid norm 2.209213980361e+00 ||r(i)||/||b|| 1.819401808180e-04
>>>>> 6 KSP unpreconditioned resid norm 4.683775185840e-01 true resid norm 4.683775085753e-01 ||r(i)||/||b|| 3.857330677735e-05
>>>>> 7 KSP unpreconditioned resid norm 3.042503284736e-02 true resid norm 3.042503349258e-02 ||r(i)||/||b|| 2.505658638883e-06
>>>>>
>>>>>
>>>>> Both methods give answers, but they are different,
>>>>
>>>> What do you mean the answers are different? Do you mean the
>>>> solution x from KSPSolve() is different? How are you calculating their
>>>> difference, and how different are they?
>>>>
>>>> Since the solutions are only approximate (the true residual norm is
>>>> around 1.642782495109e-02 and 3.042503349258e-02 for the two different
>>>> solvers), there will only be a certain number of identical digits in
>>>> the two solutions (which depends on the condition number of the
>>>> original matrix). You can run both solvers with -ksp_rtol 1.e-12 and
>>>> then (assuming everything is working correctly) the two solutions will
>>>> be much closer to each other.
>>>>
>>>> Barry
>>>>
>>>>> so I am wondering
>>>>> if it is possible that you can help me figure out which part I am
>>>>> doing wrong.
>>>>>
>>>>> Thank you for your time.
>>>>>
>>>>> Kind Regards,
>>>>> Shidi
>>
>> --
>>
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which
>> their experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/ [2]
>>
>>
>> Links:
>> ------
>> [1] https://www.mcs.anl.gov/petsc/documentation/tutorials/MSITutorial.pdf
>> [2] http://www.cse.buffalo.edu/~knepley/

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/