[petsc-users] Starting point for Stokes fieldsplit
Max Rudolph
maxwellr at gmail.com
Mon Feb 20 17:34:00 CST 2012
On Mon, Feb 20, 2012 at 3:17 PM, Matthew Knepley <knepley at gmail.com> wrote:
> On Mon, Feb 20, 2012 at 4:36 PM, Max Rudolph <rudolph at berkeley.edu> wrote:
>
>> Matt,
>>
>> Thank you for your help.
>>
>>
>> On Mon, Feb 20, 2012 at 2:05 PM, Max Rudolph <rudolph at berkeley.edu> wrote:
>>
>> Hi Dave,
>> Thanks for your help.
>>
>> Max
>>
>>> Hey Max,
>>>
>>> Without knowing anything about the specific application related to
>>> your Stokes problem, or information about the mesh you are using, I
>>> have a couple of questions and suggestions which might help.
>>
>> The test case that I am working with is isoviscous convection, benchmark
>> case 1a from Blankenbach 1989.
>>
>>> 1) If A is your Stokes operator, A = ( K, B; B^T, 0 ), what is your
>>> preconditioning operator?
>>> Specifically, what is in the (2,2) slot of the preconditioner? - i.e.
>>> what matrix are you applying -stokes_fieldsplit_1_pc_type jacobi
>>> -stokes_fieldsplit_1_ksp_type preonly to?
>>> Is it the identity, as in the SpeedUp notes?
>>
>> I think that this is the problem. The (2,2) slot in the LHS matrix is all
>> zero (pressure does not appear in the continuity equation), so I think that
>> the preconditioner is meaningless. I am still confused as to why this
>> choice of preconditioner was suggested in the tutorial, and what is a
>> better choice of preconditioner for this block? Should I be using one of
>> the Schur complement methods instead of the additive or multiplicative
>> field split?
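For reference, a sketch of the block structure under discussion, in the notation of Dave's question (velocity unknowns ordered first, then pressure):

  A = \begin{pmatrix} K & B \\ B^T & 0 \end{pmatrix},
  \qquad
  S = -B^T K^{-1} B,

where S is the pressure Schur complement that the Schur-complement variants of PCFIELDSPLIT work with. Point Jacobi applied to the zero (2,2) block produces an identity in that slot, which is the choice the tutorial demonstrates (as discussed below).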
>> It's not suggested, it is demonstrated. It's the first logical choice, since
>> Jacobi gives the identity for a 0 block (see http://www.jstor.org/pss/2158202).
>> It's not meaningless. All the better preconditioners involve either a Schur
>> complement (also shown in the tutorial) or an auxiliary operator, which is
>> more difficult to set up and thus not shown.
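As a rough sketch (not a recommendation from the thread), the Schur-complement route looks something like the following on the command line, reusing the stokes_ prefix from this discussion; exact option spellings vary a little between PETSc releases, and the Schur block still needs something sensible to precondition it (for Stokes, a scaled pressure mass matrix is a common choice):

  -stokes_ksp_type fgmres
  -stokes_pc_type fieldsplit
  -stokes_pc_fieldsplit_type schur
  -stokes_fieldsplit_0_ksp_type preonly
  -stokes_fieldsplit_0_pc_type ml
  -stokes_fieldsplit_1_ksp_type gmres
  -stokes_fieldsplit_1_ksp_rtol 1.0e-2
  -stokes_fieldsplit_1_pc_type none

With -stokes_pc_fieldsplit_type schur, split 1 solves the (matrix-free) Schur complement system rather than the zero (2,2) block.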
>>
>>
>> Thank you for clarifying this.
>>
>>
>>> 2) This choice
>>>   -stokes_fieldsplit_0_pc_type ml -stokes_fieldsplit_0_ksp_type preonly
>>> may simply not be very effective and may degrade the performance of the
>>> outer solver.
>>> I'd make the solver for the operator in the (1,1) slot much stronger,
>>> for example
>>>   -stokes_fieldsplit_0_ksp_type gmres
>>>   -stokes_fieldsplit_0_ksp_rtol 1.0e-4
>>>   -stokes_fieldsplit_0_mg_levels_ksp_type gmres
>>>   -stokes_fieldsplit_0_mg_levels_pc_type bjacobi
>>>   -stokes_fieldsplit_0_mg_levels_ksp_max_it 4
>>>
>>> Add a monitor on this solver (-stokes_fieldsplit_0_ksp_XXX) to see how
>>> ML is doing.
>>>
>>> 3) Using -stokes_pc_fieldsplit_type MULTIPLICATIVE should reduce the
>>> number of outer iterations by a factor of two, but it will use more
>>> memory.
>>>
>>> 4) You should use a flexible Krylov method on the outermost solve
>>> (-stokes_ksp_XXX), as the preconditioner is varying between each outer
>>> iteration. Use -stokes_ksp_type fgmres or -stokes_ksp_type gcr
>>
>> Thanks for pointing this out. I made that change.
>>
>>> 5) Depending on how the physical problem is scaled
>>> (non-dimensionalised), the sizes of the residuals associated with the
>>> momentum and continuity equations may be quite different. You are
>>> currently using the entire residual from (u,p) to determine when to stop
>>> iterating. You might want to consider writing a monitor which examines
>>> these residuals independently.
>>
>> I think that I have scaled the problem correctly. I (slowly) obtain a
>> sufficiently accurate solution using as options only:
>>   -stokes_ksp_atol 1e-5 -stokes_ksp_rtol 1e-5
>>   -stokes_ksp_monitor_true_residual -stokes_ksp_norm_type UNPRECONDITIONED
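Regarding Dave's point 5, a minimal sketch of a per-field residual monitor (the names SplitMonCtx, isU, and isP are placeholders; the index sets are assumed to be the same ones handed to PCFieldSplitSetIS, and VecGetSubVector needs a reasonably recent PETSc):

#include <petscksp.h>

typedef struct {
  IS isU, isP;   /* index sets for the velocity and pressure unknowns */
} SplitMonCtx;

static PetscErrorCode SplitResidualMonitor(KSP ksp, PetscInt it, PetscReal rnorm, void *ctx)
{
  SplitMonCtx    *mc = (SplitMonCtx *)ctx;
  Vec             r, ru, rp;
  PetscReal       nu, np;
  PetscErrorCode  ierr;

  /* rnorm is the norm of the full (u,p) residual; build the true residual
     b - A x_k and take the norm of each field separately instead */
  ierr = KSPBuildResidual(ksp, NULL, NULL, &r);CHKERRQ(ierr);
  ierr = VecGetSubVector(r, mc->isU, &ru);CHKERRQ(ierr);
  ierr = VecGetSubVector(r, mc->isP, &rp);CHKERRQ(ierr);
  ierr = VecNorm(ru, NORM_2, &nu);CHKERRQ(ierr);
  ierr = VecNorm(rp, NORM_2, &np);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)ksp),
                     "%3d KSP momentum resid %14.6e, continuity resid %14.6e\n",
                     (int)it, (double)nu, (double)np);CHKERRQ(ierr);
  ierr = VecRestoreSubVector(r, mc->isP, &rp);CHKERRQ(ierr);
  ierr = VecRestoreSubVector(r, mc->isU, &ru);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  return 0;
}

/* registered before KSPSolve with, e.g.:
     KSPMonitorSet(ksp, SplitResidualMonitor, &monctx, NULL);  */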
>> How do you know the problem is scaled correctly? Have you looked at the norms
>> of the residuals for the two systems?
>>
>> Thanks,
>>
>> Matt
>>
>>
>>> Cheers,
>>> Dave
>>
>>
>> Yes, here are the norms computed for the P, X, and Y components,
>> following the last residual that ksp_monitor_true_residual returned:
>>
>> 383 KSP unpreconditioned resid norm 1.121628211019e-03 true resid norm
>> 1.121628224178e-03 ||r(i)||/||b|| 9.626787321554e-10
>> P, X, Y residual norms 5.340336e-02, 4.463404e-02, 2.509621e-02
>>
>
> I am more interested in the initial residuals.
>
Is this the information that you're looking for? I ran with ksp_max_it -1.
The Y-residual below is much larger than the X- or P-residuals, presumably
because of the initial density perturbation and body force. Thanks again
for helping me.
Residual norms for stokes_ solve.
0 KSP unpreconditioned resid norm 1.165111661413e+06 true resid norm
1.165111661413e+06 ||r(i)||/||b|| 1.000000000000e+00
KSP Object:(stokes_) 2 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=-1, initial guess is zero
tolerances: relative=1e-09, absolute=1e-09, divergence=10000
right preconditioning
using UNPRECONDITIONED norm type for convergence test
PC Object:(stokes_) 2 MPI processes
type: none
linear system matrix = precond matrix:
Matrix Object: CmechLHS 2 MPI processes
type: mpiaij
rows=2883, cols=2883
total: nonzeros=199809, allocated nonzeros=199809
total number of mallocs used during MatSetValues calls =0
Matrix Object: CmechLHS 2 MPI processes
type: mpiaij
rows=2883, cols=2883
total: nonzeros=199809, allocated nonzeros=199809
total number of mallocs used during MatSetValues calls =0
using I-node (on process 0) routines: found 496 nodes, limit used
is 5
converged reason -3
stokes residual...
converged reason -3
stokes residual...
X, Y, P residual norms 0.000000e+00, 1.165112e+06, 0.000000e+00
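The "converged reason -3" printed above corresponds to KSP_DIVERGED_ITS, i.e. the iteration limit was reached, which is expected here since the solve was run with ksp_max_it -1 just to record the initial residual. A minimal sketch of checking this in code (the KSP handle name ksp is a placeholder):

  KSPConvergedReason reason;
  ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
  if (reason == KSP_DIVERGED_ITS) {
    ierr = PetscPrintf(PETSC_COMM_WORLD, "stopped at the iteration limit\n");CHKERRQ(ierr);
  }

Running with -stokes_ksp_converged_reason prints the same information without any code changes.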
>
> Thanks
>
> Matt
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>