<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Oct 23, 2014 at 9:27 AM, Tabrez Ali <span dir="ltr"><<a href="mailto:stali@geology.wisc.edu" target="_blank">stali@geology.wisc.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div>Matt<br>
<br>
Sorry about that (I always forget it). The output for the smallest
problem is now attached (see log.txt). I am also attaching some
results that compare results obtained using FS/LSC and the direct
solver (MUMPS), again for the smallest problem. The difference, as
you can see is insignificant O(1E-6).<br></div></div></blockquote><div><br></div><div>1) How do you use MUMPS if you have a saddle point</div><div><br></div><div>2) You can see from the output that something is seriously wrong with the preconditioner. It looks like it has a null space.</div><div> Did you add the elastic null modes to GAMG? Without this, it is not going to work. We have helper functions for this:</div><div><br></div><div> <a href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMPlexCreateRigidBody.html">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMPlexCreateRigidBody.html</a></div><div><br></div><div>you could just copy that code. And then use</div><div><br></div><div> <a href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetNearNullSpace.html">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetNearNullSpace.html</a></div><div><br></div><div>I don't see it in the output, so I think this is your problem.</div><div><br></div><div>In order to test, I would first use MUMPS as the A00 solver and get the Schur stuff worked out. Then I would</div><div>replace MUMPS with GAMG and tune it until I get back my original convergence.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"><div>
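
A minimal sketch of how those two calls fit together (assuming the
two-argument petsc-3.5 signatures from the man pages above; dm and A are
placeholders for the displacement mesh and the assembled A00 block):

  /* Attach the rigid body modes as the near-null space of the
     elasticity block so that GAMG can build its coarse spaces. */
  MatNullSpace   rigid;
  PetscErrorCode ierr;

  ierr = DMPlexCreateRigidBody(dm, &rigid);CHKERRQ(ierr);  /* 6 modes in 3D */
  ierr = MatSetNearNullSpace(A, rigid);CHKERRQ(ierr);      /* hint for GAMG */
  ierr = MatNullSpaceDestroy(&rigid);CHKERRQ(ierr);        /* A holds a reference */

If the code does not use DMPlex, MatNullSpaceCreateRigidBody() builds the
same modes directly from a coordinate Vec.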

In order to test, I would first use MUMPS as the A00 solver and get the
Schur complement part worked out. Then I would replace MUMPS with GAMG
and tune it until I get back my original convergence.

  Thanks,

     Matt
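
That first test might look something like the options below (a sketch: it
keeps the original LSC settings and only swaps GAMG for an exact MUMPS
factorization of A00; the solver-package option name is the petsc-3.5
one):

  -pc_type fieldsplit -pc_fieldsplit_type schur
  -pc_fieldsplit_detect_saddle_point
  -pc_fieldsplit_schur_fact_type full
  -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu
  -fieldsplit_0_pc_factor_mat_solver_package mumps
  -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type lsc
  -ksp_view -ksp_monitor_true_residual -ksp_converged_reason

Once the iteration counts with the exact A00 solve look reasonable, switch
-fieldsplit_0_pc_type back to gamg (now with the near-null space attached)
and tune until the convergence is recovered.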

> Also, I did pass 'upper' and 'full' to '-pc_fieldsplit_schur_fact_type',
> but the iteration count doesn't improve (in fact, it increases slightly).
> The attached log is with 'upper'.
>
> Regards,
>
> Tabrez
>
> On 10/23/2014 07:46 AM, Matthew Knepley wrote:
>
>> On Thu, Oct 23, 2014 at 7:20 AM, Tabrez Ali <stali@geology.wisc.edu>
>> wrote:
>>>
>>> Hello
>>>
>>> I am using the options below to solve linear elasticity/poroelasticity
>>> problems that involve slip between two surfaces on non-trivial
>>> geometries, i.e., elements with high aspect ratios, large contrasts in
>>> material properties, etc. The constraints are imposed using Lagrange
>>> multipliers.
>>>
>>> A picture (showing the displacement magnitude) is attached. The
>>> boundary nodes, i.e., the base and the four sides, are pinned.
>>>
>>> The following options appear to work well for the saddle point
>>> problem:
>>>
>>>   -pc_type fieldsplit -pc_fieldsplit_type schur
>>>   -pc_fieldsplit_detect_saddle_point
>>>   -fieldsplit_0_pc_type gamg -fieldsplit_0_ksp_type preonly
>>>   -fieldsplit_1_pc_type lsc -fieldsplit_1_ksp_type preonly
>>>   -pc_fieldsplit_schur_fact_type lower -ksp_monitor
>>>
>>> However, the number of iterations keeps increasing with problem size
>>> (see the attached plot), e.g.:
>>>
>>>   120K tets       507 iterations (KSP residual norm 8.827362494659e-05) in  17 s on   3 cores
>>>   1 million tets  1374 iterations (KSP residual norm 7.164704416296e-05) in 117 s on  20 cores
>>>   8 million tets  2495 iterations (KSP residual norm 9.101247550026e-05) in 225 s on 160 cores
>>>
>>> So what other options should I try to improve solver performance? Any
>>> tips/insights would be appreciated, as preconditioning is black magic
>>> to me.
>>
>> For reports, always run with
>>
>>   -ksp_view -ksp_monitor_true_residual -ksp_converged_reason
>>
>> so that we can see exactly what you used.
>>
>> I believe the default is a diagonal factorization. Since your outer
>> iterates are increasing, I would strengthen it to either upper or
>> full:
>>
>>   -pc_fieldsplit_schur_fact_type <upper,full>
>>
>>   Thanks,
>>
>>      Matt
>>
>>> Thanks in advance.
>>>
>>> Tabrez
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which
>> their experiments lead.
>> -- Norbert Wiener
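
For reference, the block factorizations that these options select (a
standard sketch, with A the (0,0) block, B the constraint rows, and
S = -B A^{-1} B^T the Schur complement):

  % requires amsmath
  \[
  K = \begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}
    = \underbrace{\begin{pmatrix} I & 0 \\ B A^{-1} & I \end{pmatrix}}_{\text{lower}}
      \underbrace{\begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}}_{\text{diag}}
      \underbrace{\begin{pmatrix} I & A^{-1} B^T \\ 0 & I \end{pmatrix}}_{\text{upper}}
  \]

full keeps all three factors, lower and upper each drop one triangular
factor, and diag keeps only the block-diagonal part (with S negated so the
preconditioner stays definite), which is why full and upper are generally
stronger than the diagonal default.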
</blockquote></div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener
</div></div>