[petsc-users] Fieldsplit with LSC for constrained elasticity/poroelasticity?

Mark Adams mfadams at lbl.gov
Mon Oct 27 13:17:56 CDT 2014


The null space for GAMG is not critical, but it is useful for elasticity.  If
you in fact have an indefinite operator (e.g., not "pinned"), then you need to
use an iterative coarse grid solver.  You are using '-fieldsplit_0_pc_type gamg
-fieldsplit_0_ksp_type preonly', and you have a hard elasticity problem, so you
are going to want to start with a stronger solver: use cg instead of preonly.
As Matt said, start with MUMPS, then go to CG/GAMG, and then see how far you
can cut the _0_ solver down.
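For example (an untested sketch; the tolerance is only a placeholder to tune),
replacing the _0_ solve with CG/GAMG could look like:

    -fieldsplit_0_ksp_type cg
    -fieldsplit_0_ksp_rtol 1.0e-8
    -fieldsplit_0_pc_type gamg
    -fieldsplit_0_ksp_converged_reason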
Mark

On Thu, Oct 23, 2014 at 11:51 AM, Matthew Knepley <knepley at gmail.com> wrote:

> On Thu, Oct 23, 2014 at 10:48 AM, Tabrez Ali <stali at geology.wisc.edu>
> wrote:
>
>>  Matt
>>
>> On 10/23/2014 09:54 AM, Matthew Knepley wrote:
>>
>>  On Thu, Oct 23, 2014 at 9:27 AM, Tabrez Ali <stali at geology.wisc.edu>
>> wrote:
>>
>>>  Matt
>>>
>>> Sorry about that (I always forget it). The output for the smallest
>>> problem is now attached (see log.txt). I am also attaching results that
>>> compare FS/LSC against the direct solver (MUMPS), again for the smallest
>>> problem. The difference, as you can see, is insignificant, O(1E-6).
>>>
>>
>>  1) How do you use MUMPS if you have a saddle point?
>>
>> I simply used -pc_type lu -pc_factor_mat_solver_package mumps.
>>
>>
>>  2) You can see from the output that something is seriously wrong with
>> the preconditioner. It looks like it has a null space.
>>     Did you add the elastic null modes to GAMG? Without this, it is not
>> going to work. We have helper functions for this:
>>
>>
>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMPlexCreateRigidBody.html
>>
>>  You could just copy that code, and then use
>>
>>
>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetNearNullSpace.html
>>
>>  I don't see it in the output, so I think this is your problem.
>>
>>  In order to test, I would first use MUMPS as the A00 solver and get the
>> Schur stuff worked out. Then I would
>> replace MUMPS with GAMG and tune it until I get back my original
>> convergence.
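For reference, that first step might look like this on the command line
(untested; it assumes the same _0_ split prefix as in the current options):

    -fieldsplit_0_ksp_type preonly
    -fieldsplit_0_pc_type lu
    -fieldsplit_0_pc_factor_mat_solver_package mumps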
>>
>> I will try this with MatNullSpaceCreateRigidBody. Btw does it matter if
>> some nodes are pinned?
>>
>
> No, these are null modes of the operator, not of the particular problem.
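In code, attaching the rigid-body modes is roughly the following (a minimal,
untested sketch; AttachRigidBodyModes is a made-up helper name, and the
coordinate Vec must have block size equal to the spatial dimension and a
layout matching the A00 block):

    #include <petscmat.h>

    /* Attach rigid-body modes as the near null space of the displacement
       (A00) block so that GAMG can build its coarse spaces from them. */
    static PetscErrorCode AttachRigidBodyModes(Mat A00, Vec coords)
    {
      MatNullSpace   rbm;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = MatNullSpaceCreateRigidBody(coords, &rbm);CHKERRQ(ierr);
      ierr = MatSetNearNullSpace(A00, rbm);CHKERRQ(ierr);
      ierr = MatNullSpaceDestroy(&rbm);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

The near null space only informs the preconditioner (GAMG); it does not
change the operator, so the pinned boundary nodes are not a problem.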
>
>   Matt
>
>
>> Tabrez
>>
>>
>>    Thanks,
>>
>>      Matt
>>
>>
>>>  Also, I did pass 'upper' and 'full' to
>>> '-pc_fieldsplit_schur_factorization_type' but the iteration count doesn't
>>> improve (in fact, it increases slightly). The attached log is with 'upper'.
>>>
>>> Regards,
>>>
>>> Tabrez
>>>
>>> On 10/23/2014 07:46 AM, Matthew Knepley wrote:
>>>
>>>  On Thu, Oct 23, 2014 at 7:20 AM, Tabrez Ali <stali at geology.wisc.edu>
>>> wrote:
>>>
>>>>  Hello
>>>>
>>>> I am using the options below to solve linear elasticity/poroelasticity
>>>> problems involving slip between two surfaces on non-trivial geometries,
>>>> i.e., elements with high aspect ratios, large contrasts in material
>>>> properties, etc. The constraints are imposed using Lagrange multipliers.
>>>>
>>>> A picture (showing the displacement magnitude) is attached. The boundary
>>>> nodes, i.e., the base and the four sides, are pinned.
>>>>
>>>> The following options appear to work well for the saddle point problem:
>>>>
>>>> -pc_type fieldsplit -pc_fieldsplit_type schur
>>>> -pc_fieldsplit_detect_saddle_point -fieldsplit_0_pc_type gamg
>>>> -fieldsplit_0_ksp_type preonly -fieldsplit_1_pc_type lsc
>>>> -fieldsplit_1_ksp_type preonly -pc_fieldsplit_schur_fact_type lower
>>>> -ksp_monitor
>>>>
>>>> However, the number of iterations keeps increasing with the problem
>>>> size (see attached plot), e.g.,
>>>>
>>>> 120K Tets:        507 iterations (KSP Residual norm 8.827362494659e-05) in  17 secs on   3 cores
>>>> 1 Million Tets:  1374 iterations (KSP Residual norm 7.164704416296e-05) in 117 secs on  20 cores
>>>> 8 Million Tets:  2495 iterations (KSP Residual norm 9.101247550026e-05) in 225 secs on 160 cores
>>>>
>>>> So what other options should I try to improve solver performance? Any
>>>> tips/insights would be appreciated as preconditioning is black magic to me.
>>>>
>>>
>>>  For reports, always run with
>>>
>>>    -ksp_view -ksp_monitor_true_residual -ksp_converged_reason
>>>
>>>  so that we can see exactly what you used.
>>>
>>>  I believe the default is a diagonal factorization. Since your outer
>>> iteration counts are increasing, I would strengthen this to either
>>> upper or full:
>>>
>>>    -pc_fieldsplit_schur_factorization_type <upper, full>
>>>
>>>    Thanks,
>>>
>>>        Matt
>>>
>>>
>>>>  Thanks in advance.
>>>>
>>>> Tabrez
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>