[petsc-users] GAMG parallel convergence sensitivity
Mark Lohry
mlohry at gmail.com
Wed Mar 13 20:27:30 CDT 2019
Thanks, Mark.
> This makes it hard to give much advice. You really just need to test things
> and use what works best.
Yeah, arriving at the current setup was the result of a lot of rather
aimless trial and error.
> I see you are using 20 smoothing steps. That is very high. Generally you
> want to use the v-cycle more (i.e., a lower number of smoothing steps and
> more iterations).
This was partly a reaction to seeing a lot of cases that needed far too many
outer GMRES iterations (and orthogonalization directions), so I tried to
coerce AMG into doing more work per cycle.
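In case the concrete options are useful, here's a minimal sketch of what
backing off the smoothing and leaning on more V-cycles looks like as runtime
options (the option names are the standard GAMG/PCMG ones; the values are
only illustrative, not what I've settled on):

    # GAMG with a light Chebyshev/Jacobi smoother per level, relying on the
    # outer Krylov method to do more cycles rather than heavy work per level
    -ksp_type gmres
    -pc_type gamg
    -pc_mg_type multiplicative      # plain V-cycle rather than full MG
    -mg_levels_ksp_type chebyshev   # level smoother KSP
    -mg_levels_ksp_max_it 2         # e.g. 2 smoothing steps instead of 20
    -mg_levels_pc_type jacobi       # pointwise Jacobi inside the smoother
    -ksp_monitor_true_residual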
> You are beyond what AMG is designed for. If you press this problem it will
> break any solver and will break generic AMG relatively early.
For what it's worth, I'm regularly solving much larger problems (1M-100M
unknowns, unsteady) with this discretization and AMG setup on 500+ cores,
with impressively good convergence that is dramatically better than ILU/ASM.
This just happens to be the first time I've experimented with this extremely
low Mach number regime, which is known to have a whole host of issues and
generally needs low-Mach preconditioners; I was just a bit surprised by this
specific failure mechanism.
Thanks for the point on jacobi vs. bjacobi.
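For anyone following along in the archive, switching the level smoother
between the two is just an options change; a rough sketch of the two
variants, assuming the default Chebyshev KSP on each level (values again
only illustrative):

    # partition-dependent smoother: block Jacobi with ILU on each block,
    # so the preconditioner changes with the number of ranks
    -mg_levels_pc_type bjacobi
    -mg_levels_sub_pc_type ilu

    # partition-independent smoother: pointwise Jacobi, identical on any
    # number of ranks (the limit of bjacobi as blocks shrink to single rows)
    -mg_levels_pc_type jacobi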
On Wed, Mar 13, 2019 at 9:00 PM Mark Adams <mfadams at lbl.gov> wrote:
>
>>
>> Any thoughts here? Is there anything obviously wrong with my setup?
>>
>
> Fast and robust solvers for NS require specialized methods that are not
> provided in PETSc and the methods tend to require tighter integration with
> the meshing and discretization than the algebraic interface supports.
>
> I see you are using 20 smoothing steps. That is very high. Generally you
> want to use the v-cycle more (i.e., a lower number of smoothing steps and
> more iterations).
>
> And, full MG is a bit tricky. I would not use it, but if it helps, fine.
>
>
>> Any way to reduce the dependence of the convergence iterations on the
>> parallelism?
>>
>
> This comes from the bjacobi smoother. Use jacobi and you will not have a
> parallelism problem; jacobi is what bjacobi reduces to in the limit of full
> parallelism anyway.
>
>
>> -- obviously I expect the iteration count to be higher in parallel, but I
>> didn't expect such catastrophic failure.
>>
>>
> You are beyond what AMG is designed for. If you press this problem it will
> break any solver and will break generic AMG relatively early.
>
> This makes it hard to give much advice. You really just need to test
> things and use what works best. There are special purpose methods that you
> can implement in PETSc but that is a topic for a significant project.