[petsc-users] [EXTERNAL] GPU implementation of serial smoothers
Zhang, Chonglin
zhangc20 at rpi.edu
Tue Jan 10 15:03:58 CST 2023
I am using the following options in my Poisson solver running on GPU; they were suggested by Barry and Mark (Dr. Mark Adams).
-ksp_type cg
-pc_type gamg
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type jacobi
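Collected into a PETSc options file, the configuration above would look like the sketch below. The GPU backend options at the end (-mat_type aijcusparse, -vec_type cuda) are an assumption for a CUDA build, not part of the configuration quoted in the thread.

```
# .petscrc (sketch): CG + GAMG with Chebyshev/Jacobi smoothing.
-ksp_type cg
-pc_type gamg
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type jacobi
# Assumed CUDA backend selection, not from the original message:
-mat_type aijcusparse
-vec_type cuda
```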
On Jan 10, 2023, at 3:31 PM, Mark Lohry <mlohry at gmail.com> wrote:
So what are people using for GAMG configs on GPU? I was hoping petsc today would be performance competitive with AMGx but it sounds like that's not the case?
On Tue, Jan 10, 2023 at 3:03 PM Jed Brown <jed at jedbrown.org> wrote:
Mark Lohry <mlohry at gmail.com> writes:
> I definitely need multigrid. I was under the impression that GAMG was
> relatively cuda-complete, is that not the case? What functionality works
> fully on GPU and what doesn't, without any host transfers (aside from
> what's needed for MPI)?
>
> If I use -pc_type gamg -mg_levels_pc_type pbjacobi -mg_levels_ksp_type
> richardson, is that fully on device, but do -mg_levels_pc_type ilu or
> -mg_levels_pc_type sor require transfers?
You can do `-mg_levels_pc_type ilu`, but it'll be extremely slow (like 20x slower than an operator apply). One can use Krylov smoothers, though they add more synchronization. Automatic construction of operator-dependent multistage smoothers for linear multigrid (needed because Chebyshev only works for problems whose eigenvalues lie near the real axis) is something I've wanted to develop for at least a decade, but time is always short. I might put some effort into p-MG with such smoothers this year as we add DDES to our scale-resolving compressible solver.
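The Chebyshev smoothing that -mg_levels_ksp_type chebyshev selects is attractive on GPUs precisely because it needs no inner products, only operator applies. A minimal NumPy illustration of the standard three-term recurrence is sketched below; the function name and eigenvalue bounds are hypothetical, and this is a generic sketch for an SPD operator whose target eigenvalues are assumed to lie in [lo, hi], not PETSc code.

```python
import numpy as np

def chebyshev_smooth(A, b, x, lo, hi, iters=3):
    # Chebyshev iteration damping error components whose eigenvalues
    # lie in [lo, hi]; no inner products, so no global reductions.
    theta = 0.5 * (hi + lo)   # center of the target interval
    delta = 0.5 * (hi - lo)   # half-width of the target interval
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x             # initial residual
    d = r / theta             # initial correction direction
    for _ in range(iters):
        x = x + d
        r = r - A @ d         # update residual without a fresh A @ x
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x
```

As a smoother one typically targets only the upper part of the spectrum (e.g. lo = hi/10, with hi estimated from a few Krylov iterations), leaving the low-frequency error to the coarse grid.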