[petsc-users] GPU implementation of serial smoothers

Stefano Zampini stefano.zampini at gmail.com
Tue Jan 10 14:59:50 CST 2023


DILU in OpenFOAM is our block Jacobi with ILU subdomain solvers.
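
For reference, a minimal sketch of the corresponding PETSc run-time options
(the executable name is a placeholder, and DILU's factorization only roughly
matches ILU(0), so this is an analogy rather than an exact equivalent):

    # block Jacobi with an ILU(0) solve on each subdomain/block
    ./your_app -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 0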

On Tue, Jan 10, 2023, 23:45 Barry Smith <bsmith at petsc.dev> wrote:

>
>   The default is some kind of Jacobi plus Chebyshev; for a certain class
> of problems, it is quite good.
>
>
>
> On Jan 10, 2023, at 3:31 PM, Mark Lohry <mlohry at gmail.com> wrote:
>
> So what are people using for GAMG configs on GPU? I was hoping PETSc today
> would be performance competitive with AmgX, but it sounds like that's not
> the case?
>
> On Tue, Jan 10, 2023 at 3:03 PM Jed Brown <jed at jedbrown.org> wrote:
>
>> Mark Lohry <mlohry at gmail.com> writes:
>>
>> > I definitely need multigrid. I was under the impression that GAMG was
>> > relatively cuda-complete, is that not the case? What functionality works
>> > fully on GPU and what doesn't, without any host transfers (aside from
>> > what's needed for MPI)?
>> >
>> > If I use -pc_type gamg -mg_levels_pc_type pbjacobi
>> > -mg_levels_ksp_type richardson, is that fully on device, while
>> > -mg_levels_pc_type ilu or -mg_levels_pc_type sor require transfers?
>>
>> You can do `-mg_levels_pc_type ilu`, but it'll be extremely slow (like
>> 20x slower than an operator apply). One can use Krylov smoothers, though
>> that's more synchronization. Automatic construction of operator-dependent
>> multistage smoothers for linear multigrid (because Chebyshev only works for
>> problems that have eigenvalues near the real axis) is something I've wanted
>> to develop for at least a decade, but time is always short. I might put
>> some effort into p-MG with such smoothers this year as we add DDES to our
>> scale-resolving compressible solver.
>>
>
>
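
A minimal sketch of the GAMG smoother configurations discussed in the quoted
thread (the executable name is a placeholder; the option names are standard
PETSc options, but defaults can differ between versions):

    # Jacobi plus Chebyshev smoothing, close to the GAMG default
    ./your_app -pc_type gamg \
        -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi \
        -mat_type aijcusparse -vec_type cuda

    # fully on-device point-block Jacobi / Richardson smoothing
    ./your_app -pc_type gamg \
        -mg_levels_ksp_type richardson -mg_levels_pc_type pbjacobi \
        -mat_type aijcusparse -vec_type cuda

A Krylov smoother such as -mg_levels_ksp_type gmres -mg_levels_ksp_max_it 2
with -mg_levels_pc_type pbjacobi also stays on device, at the cost of the
extra synchronization Jed mentions.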