[petsc-users] troubleshooting AMG, coupled navier stokes, large eigenvalues on coarsest level

Mark Lohry mlohry at gmail.com
Wed Nov 8 20:42:34 CST 2017


>
>
>>> You want to use the Schur complement PCs for NS. We have some support
>>> for PC where you give us a mass matrix.
>>
>>
>> I'll look into it, but I'm not aware of Schur complement being used for
>> compressible/coupled NS, only for incompressible/segregated.
>>
>
>
> Really? OK, you've looked into it more than me.
>

Correct me if I'm wrong; I haven't looked into it much at all. I just
haven't encountered Schur complement methods in CFD outside of segregated
SIMPLE-type algorithms (like the ex70 PCFIELDSPLIT example). Those
segregated pressure-correction methods tend to get progressively worse as
the problem becomes less elliptic / the Mach number moves away from zero.

> sor


Makes sense; explicit RK methods are pretty decent smoothers for the
nonlinear geometric multigrid approach. Thanks for the help; hopefully at
least using proper smoothers in GAMG gets me a non-wrong preconditioner.
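
For concreteness, the option set being discussed (GAMG with unsmoothed
aggregation and SOR/Richardson smoothing in place of the default
Chebyshev/Jacobi) looks roughly like the following; this is a sketch to
check against the manual for your PETSc version, not a tuned recipe:

  -ksp_type fgmres \
  -pc_type gamg -pc_gamg_agg_nsmooths 0 -pc_gamg_sym_graph true \
  -pc_gamg_square_graph 1 \
  -mg_levels_ksp_type richardson -mg_levels_ksp_max_it 2 \
  -mg_levels_pc_type sor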


On Wed, Nov 8, 2017 at 8:40 PM, Mark Adams <mfadams at lbl.gov> wrote:

>
>
> On Wed, Nov 8, 2017 at 8:02 PM, Mark Lohry <mlohry at gmail.com> wrote:
>
>> Do you mean the element Jacobian matrix has 35 "vertices"? You have 5x5
>>> dense blocks (or at least you don't care about resolving any sparsity, or
>>> you want to use block preconditioners). So your element Jacobians are
>>> [35x5]^2 dense(ish) matrices. This is not particularly useful for the
>>> solvers.
>>
>>
>> Yes, the local element Jacobians are (35x5)^2 ( (N+1)(N+2)(N+3)/6
>> vertices per Nth-order element in 3D). They'd be 100% dense. So the real
>> blocks of the full system Jacobian are of that size, and not 5x5. Block ILU
>> is pretty popular for these discretizations for obvious reasons.
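
For reference, the usual PETSc analogue of that kind of block ILU is
ILU(0) on a blocked (BAIJ) matrix, or block Jacobi with ILU subdomain
solves; a rough sketch of the latter (generic options, not specific to
this code):

  -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 0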
>>
>
> OK, but don't set that block size.
>
>
>>
>> This is on the full NS problem I assume. You need special solvers for
>>> this. Scalable solvers for NS are not trivial.
>>
>>
>> Yes, compressible 3D NS. I'm aware scaling isn't trivial, but this isn't
>> a scaling problem; I'm obviously computing a wrong preconditioner, since
>> it's actively hurting the linear solve.
>>
>
> By scalable I mean anything other than a direct solver.
>
>
>>
>> Yes, you should use Gauss-Seidel for non-symmetric systems, or indefinite
>>> systems.
>>
>>
>> Thanks!
>>
>
> sor
>
>
>>
>> You want to use the Schur complement PCs for NS. We have some support
>>> for PC where you give us a mass matrix.
>>
>>
>> I'll look into it, but I'm not aware of Schur complement being used for
>> compressible/coupled NS, only for incompressible/segregated.
>>
>
>
> Really? OK, you've looked into it more than me.
>
> This is DG, where the mass matrix is a constant block-diagonal matrix you
>> typically just invert once at the beginning; is there a PC where this can
>> be exploited?
>>
>
> Don't know, but being able to explicitly invert this might let you compute
> an exact Schur complement matrix and then use algebraic solvers (like LU
> and AMG).
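
For reference, assuming a 2x2 block splitting [A00 A01; A10 A11] with A00
taken as the block that can be inverted explicitly, the Schur complement
in question is

  S = A11 - A10 * inv(A00) * A01,

so having inv(A00) in hand would let S be assembled exactly and handed to
LU or AMG.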
>
>
>>
>>
>> On Wed, Nov 8, 2017 at 7:04 PM, Mark Adams <mfadams at lbl.gov> wrote:
>>
>>>
>>>
>>> On Tue, Nov 7, 2017 at 3:00 PM, Mark Lohry <mlohry at gmail.com> wrote:
>>>
>>>> I've now got GAMG running on matrix-free Newton-Krylov with the
>>>> Jacobian provided by finite differences with coloring (thanks again for
>>>> the help). For 3D Poisson with 4th-order DG or higher (35^2 blocks), GAMG
>>>> with default settings is giving textbook convergence, which is great. Of
>>>> course coupled compressible Navier-Stokes is harder, and convergence is
>>>> bad-to-nonexistent.
>>>>
>>>>
>>>> 1) Doc says "Equations must be ordered in “vertex-major” ordering"; in
>>>> my discretization, each "node" has 5 coupled degrees of freedom (density, 3
>>>> x momentum, energy). I'm ordering my unknowns:
>>>>
>>>> rho_i, rhou_i, rhov_i, rhow_i, Et_i, rho_i+1, rhou_i+1, ...  (i.e.
>>>> row-major matrix order if you wrote the unknowns as [{rho}, {rhou}, ... ]),
>>>>
>>>> and then setting the block size to 5. Is that correct?
>>>>
>>>
>>> yes
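
A minimal sketch of what that looks like on the PETSc side (illustrative
only; `J` and `n_local_nodes` are placeholder names, not from the actual
code):

  /* unknowns interleaved per node: [rho, rhou, rhov, rhow, Et] (vertex-major) */
  PetscInt n_local_rows = 5 * n_local_nodes;  /* 5 coupled dofs per owned node */
  Mat      J;
  MatCreate(PETSC_COMM_WORLD, &J);
  MatSetSizes(J, n_local_rows, n_local_rows, PETSC_DETERMINE, PETSC_DETERMINE);
  MatSetBlockSize(J, 5);   /* the block size GAMG sees */
  MatSetFromOptions(J);
  MatSetUp(J);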
>>>
>>>
>>>> I've also tried using the actual sparsity of the matrix
>>>>
>>>
>>> Sorry, but using for what?
>>>
>>>
>>>> which has larger dense blocks (e.g. [35x5]^2), but neither seemed to
>>>> help.
>>>>
>>>
>>> Do you mean the element Jacobian matrix has 35 "vertices"? You have 5x5
>>> dense blocks (or at least you don't care about resolving any sparsity, or
>>> you want to use block preconditioners). So your element Jacobians are
>>> [35x5]^2 dense(ish) matrices. This is not particularly useful for the
>>> solvers.
>>>
>>>
>>>>
>>>>
>>>> 2) With default settings, and with the -pc_gamg_square_graph,
>>>> -pc_gamg_sym_graph, and -pc_gamg_agg_nsmooths 0 options mentioned in the
>>>> manual, the eigenvalue estimates explode on the coarsest level, which I
>>>> don't see with Poisson:
>>>>
>>>>   Down solver (pre-smoother) on level 1 -------------------------------
>>>>     KSP Object: (mg_levels_1_) 32 MPI processes
>>>>       type: chebyshev
>>>>         eigenvalue estimates used:  min = 0.18994, max = 2.08935
>>>>         eigenvalues estimate via gmres min 0.00933256, max 1.8994
>>>>   Down solver (pre-smoother) on level 2 -------------------------------
>>>>     KSP Object: (mg_levels_2_) 32 MPI processes
>>>>       type: chebyshev
>>>>         eigenvalue estimates used:  min = 0.165969, max = 1.82566
>>>>         eigenvalues estimate via gmres min 0.0290728, max 1.65969
>>>>   Down solver (pre-smoother) on level 3 -------------------------------
>>>>     KSP Object: (mg_levels_3_) 32 MPI processes
>>>>       type: chebyshev
>>>>         eigenvalue estimates used:  min = 0.146479, max = 1.61126
>>>>         eigenvalues estimate via gmres min 0.204673, max 1.46479
>>>>   Down solver (pre-smoother) on level 4 -------------------------------
>>>>     KSP Object: (mg_levels_4_) 32 MPI processes
>>>>       type: chebyshev
>>>>         eigenvalue estimates used:  min = 6.81977e+09, max = 7.50175e+10
>>>>         eigenvalues estimate via gmres min -2.76436e+12, max 6.81977e+10
>>>>
>>>> What's happening here? (Full -ksp_view below)
>>>>
>>>
>>> This is on the full NS problem I assume. You need special solvers for
>>> this. Scalable solvers for NS are not trivial.
>>>
>>>
>>>>
>>>> 3) I'm not very familiar with Chebyshev smoothers, but they're only for
>>>> SPD systems (?).
>>>>
>>>
>>> Yes, you should use Gauss-Seidel for non-symmetric systems, or
>>> indefinite systems.
>>>
>>>
>>>> Is this an inappropriate smoother for this problem?
>>>>
>>>> 4) With GMRES, the preconditioned residual is ~10 orders of magnitude
>>>> larger than the true residual, and the preconditioned residual drops while
>>>> the true residual rises. I'm assuming this means something is very wrong?
>>>>
>>>
>>> Yes, your preconditioner is no good.
>>>
>>>
>>>>
>>>> 5) -pc_type hypre -pc_hypre_type boomeramg also works perfectly for the
>>>> Poisson case, but hits NaN on the first cycle for NS.
>>>>
>>>>
>>>>
>>> You want to use the Schur complement PCs for NS. We have some support
>>> for PC where you give us a mass matrix. These are motivated by assuming a
>>> certain "commutation" holds (which is not actually true). These were
>>> developed by, among others, Andy Wathen, if you want to do a literature
>>> search. You will probably have to read the literature to understand the
>>> options and issues for your problem. Look at the PETSc manual and you
>>> should find a description of PETSc support for these PCs and some
>>> discussion of them and references. This should get you started.
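
For anyone reading along, these live under PCFIELDSPLIT; a rough sketch of
the command-line shape (field numbering and inner solvers are illustrative
only, not a recommendation for this particular problem):

  -pc_type fieldsplit -pc_fieldsplit_type schur \
  -pc_fieldsplit_schur_fact_type full \
  -pc_fieldsplit_schur_precondition selfp \
  -fieldsplit_0_pc_type gamg \
  -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type jacobi

When supplying your own mass matrix as the Schur preconditioner,
-pc_fieldsplit_schur_precondition user together with
PCFieldSplitSetSchurPre() is the relevant hook.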
>>>
>>> Mark
>>>
>>>
>>>
>>>
>>>
>>
>>
>