[petsc-users] Sparse solvers for distributed GPU matrices/vectors arising from 3D poisson eq
Sajid Ali Syed
sasyed at fnal.gov
Fri Feb 4 11:09:01 CST 2022
Hi PETSc-developers,
Could the linear solver table (at https://petsc.org/main/overview/linear_solve_table/) be updated with information regarding direct solvers that work on mpiaijkokkos/kokkos (or mpiaijcusparse/cuda) matrix/vector types?
The use case is to repeatedly solve with the same matrix, so any solver that can perform the SpTRSV phase entirely with GPU matrices/vectors would be helpful, even if the initial factorization is performed using CPU matrices/vectors with GPU offload. This would be the distributed-memory counterpart to the current device-solve capability of the seqaijkokkos matrix type (provided by the Kokkos Kernels SpTRSV routines). The system arises from a 7-point finite difference discretization of the 3D Poisson equation with Dirichlet boundary conditions on a 256x256x1024 mesh, which will likely necessitate using multiple GPUs.
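To make the use case concrete, here is a minimal sketch of the driver we have in mind (assuming the operator is assembled through a DMDA; the solve-loop count is arbitrary and the package passed to -pc_factor_mat_solver_type is exactly the open question):

#include <petscdmda.h>
#include <petscksp.h>

int main(int argc, char **argv)
{
  DM             da;
  Mat            A;
  Vec            x, b;
  KSP            ksp;
  PetscInt       i;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  /* 256x256x1024 grid, 7-point (star) stencil, 1 dof per node */
  ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DM_BOUNDARY_NONE, DMDA_STENCIL_STAR, 256, 256, 1024,
                      PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      1, 1, NULL, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr); /* picks up -dm_mat_type / -dm_vec_type */
  ierr = DMSetUp(da);CHKERRQ(ierr);

  ierr = DMCreateMatrix(da, &A);CHKERRQ(ierr);
  /* ... assemble the 7-point Poisson stencil with MatSetValuesStencil(),
         imposing the Dirichlet conditions on the boundary rows ... */

  ierr = DMCreateGlobalVector(da, &x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &b);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* e.g. -ksp_type preonly -pc_type lu
                                                  -pc_factor_mat_solver_type <package> */

  /* The factorization happens once, on the first solve; every later solve
     reuses it, so ideally only the triangular solves (SpTRSV) repeat on the GPU. */
  for (i = 0; i < 100; i++) {
    /* ... fill b for this right-hand side ... */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  }

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}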
The recent article on PetscSF (arXiv:2102.13018) describes an asynchronous CG solver that works well on communication-bound multi-GPU systems. Is this solver available now, and can it be combined with GAMG or hypre preconditioning?
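For reference, the configuration we would aim for is the standard CG/GAMG combination with Kokkos matrix/vector types, with the KSP type swapped for the asynchronous variant once we know how it is exposed (the option values below are only the usual ones, not the asynchronous solver's):

  -ksp_type cg -pc_type gamg -dm_mat_type aijkokkos -dm_vec_type kokkos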
Thank You,
Sajid Ali (he/him) | Research Associate
Scientific Computing Division
Fermi National Accelerator Laboratory
s-sajid-ali.github.io