[petsc-users] GPU implementation of serial smoothers

Barry Smith bsmith at petsc.dev
Tue Jan 10 14:44:47 CST 2023


  The default is some kind of Jacobi plus Chebyshev; for a certain class of problems it is quite good.
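
  For reference, a minimal options sketch of that default smoother on a CUDA build (these are standard PETSc option names; the exact per-level defaults can differ by version and problem, so treat this as illustrative rather than the precise defaults):

      -pc_type gamg
      -mg_levels_ksp_type chebyshev
      -mg_levels_pc_type jacobi
      -mat_type aijcusparse
      -vec_type cuda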



> On Jan 10, 2023, at 3:31 PM, Mark Lohry <mlohry at gmail.com> wrote:
> 
> So what are people using for GAMG configs on GPU? I was hoping PETSc today would be performance-competitive with AMGx, but it sounds like that's not the case?
> 
> On Tue, Jan 10, 2023 at 3:03 PM Jed Brown <jed at jedbrown.org> wrote:
>> Mark Lohry <mlohry at gmail.com> writes:
>> 
>> > I definitely need multigrid. I was under the impression that GAMG was
>> > relatively cuda-complete, is that not the case? What functionality works
>> > fully on GPU and what doesn't, without any host transfers (aside from
>> > what's needed for MPI)?
>> >
>> > If I use -pc_type gamg -mg_levels_pc_type pbjacobi -mg_levels_ksp_type
>> > richardson, is that fully on device, while -mg_levels_pc_type ilu or
>> > -mg_levels_pc_type sor require transfers?
>> 
>> You can do `-mg_levels_pc_type ilu`, but it'll be extremely slow (like 20x slower than an operator apply). One can use Krylov smoothers, though that adds more synchronization. Automatic construction of operator-dependent multistage smoothers for linear multigrid (because Chebyshev only works for problems that have eigenvalues near the real axis) is something I've wanted to develop for at least a decade, but time is always short. I might put some effort into p-MG with such smoothers this year as we add DDES to our scale-resolving compressible solver.
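
A hedged sketch of the Krylov-smoother alternative mentioned above: a few fixed GMRES iterations with point-block Jacobi on each level, so the smoother stays entirely on the device (option names are standard PETSc options; the iteration count is illustrative, not a recommendation):

    -pc_type gamg
    -mg_levels_ksp_type gmres
    -mg_levels_ksp_max_it 3
    -mg_levels_pc_type pbjacobi
    -mat_type aijcusparse
    -vec_type cuda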

