[petsc-users] KSP on GPU

Carl-Johan Thore carl-johan.thore at liu.se
Sun Oct 30 10:02:47 CDT 2022


Hi,

I'm solving a topology optimization problem with Stokes flow discretized by a stabilized Q1-Q0 finite element method
and using BiCGStab with the fieldsplit preconditioner to solve the linear systems. The implementation
is based on DMStag, runs on Ubuntu via WSL2, and works fine with PETSc-3.18.1 on multiple CPU cores and the following
options for the preconditioner:

-fieldsplit_0_ksp_type preonly \
-fieldsplit_0_pc_type gamg \
-fieldsplit_0_pc_gamg_reuse_interpolation 0 \
-fieldsplit_1_ksp_type preonly \
-fieldsplit_1_pc_type jacobi

However, when I enable GPU computations by adding two options,

...
-dm_vec_type cuda \
-dm_mat_type aijcusparse \
-fieldsplit_0_ksp_type preonly \
-fieldsplit_0_pc_type gamg \
-fieldsplit_0_pc_gamg_reuse_interpolation 0 \
-fieldsplit_1_ksp_type preonly \
-fieldsplit_1_pc_type jacobi

KSP still works fine for the first couple of topology optimization iterations, but then
stops with "Linear solve did not converge due to DIVERGED_DTOL ..".
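
For reference, when comparing the CPU and GPU runs I can add a few standard PETSc monitoring options (a sketch; these are documented flags, with -log_view also reporting whether GPU kernels are actually being used):

-ksp_monitor_true_residual \
-ksp_converged_reason \
-log_view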

My question is whether I should expect the GPU versions of the linear solvers and preconditioners
to behave exactly like their CPU counterparts (I got this impression from the documentation),
in which case I've probably made a mistake in my own code, or whether there are other or additional
settings or modifications needed to run on the GPU (an NVIDIA Quadro T2000)?

Kind regards,

Carl-Johan
