<div dir="ltr"><div>Thanks Stefano and Chonglin!</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>DILU in openfoam is our block Jacobi ilu subdomain solvers <br></div></blockquote><div><br></div><div> are you saying that -pc_type gang -mg_levels_pc_type -mg_levels_ksp_type richardson gives you something exactly equivalent to DILU?<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Jan 10, 2023 at 4:04 PM Zhang, Chonglin <<a href="mailto:zhangc20@rpi.edu">zhangc20@rpi.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Jan 10, 2023 at 4:04 PM Zhang, Chonglin <<a href="mailto:zhangc20@rpi.edu">zhangc20@rpi.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
I am using the following options in my Poisson solver running on the GPU, which were suggested by Barry and Mark (Dr. Mark Adams).
<div> -ksp_type cg</div>
<div> -pc_type gamg</div>
<div> -mg_levels_ksp_type chebyshev</div>
<div> -mg_levels_pc_type jacobi</div>
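<div><br></div><div>For reference, a sketch of how a full run line looks with these options (the executable name is just a placeholder, and the aijcusparse/cuda options assume the matrix and vectors take their types from the options database; if they come from a DM, the -dm_mat_type/-dm_vec_type forms would be used instead):</div><div><br></div><div> mpiexec -n 4 ./poisson \</div><div>   -ksp_type cg -pc_type gamg \</div><div>   -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi \</div><div>   -mat_type aijcusparse -vec_type cuda \</div><div>   -ksp_monitor -log_view</div><div><br></div>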
<div>
<div>
<blockquote type="cite">
<div>On Jan 10, 2023, at 3:31 PM, Mark Lohry <<a href="mailto:mlohry@gmail.com" target="_blank">mlohry@gmail.com</a>> wrote:</div>
<br>
<div>
<div dir="ltr">
<div></div>
<div>So what are people using for GAMG configs on GPU? I was hoping PETSc today would be performance-competitive with AmgX, but it sounds like that's not the case?<br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Tue, Jan 10, 2023 at 3:03 PM Jed Brown <<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Mark Lohry <<a href="mailto:mlohry@gmail.com" target="_blank">mlohry@gmail.com</a>> writes:<br>
<br>
> I definitely need multigrid. I was under the impression that GAMG was<br>
> relatively cuda-complete, is that not the case? What functionality works<br>
> fully on GPU and what doesn't, without any host transfers (aside from<br>
> what's needed for MPI)?<br>
><br>
> If I use -pc_type gamg -mg_levels_pc_type pbjacobi -mg_levels_ksp_type<br>
> richardson, is that fully on device, but -mg_levels_pc_type ilu or<br>
> -mg_levels_pc_type sor require transfers?<br>
<br>
You can do `-mg_levels_pc_type ilu`, but it'll be extremely slow (like 20x slower than an operator apply). One can use Krylov smoothers, though that's more synchronization. Automatic construction of operator-dependent multistage smoothers for linear multigrid
(because Chebyshev only works for problems that have eigenvalues near the real axis) is something I've wanted to develop for at least a decade, but time is always short. I might put some effort into p-MG with such smoothers this year as we add DDES to our
scale-resolving compressible solver.<br>
</blockquote>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote></div>