[petsc-users] [GPU] Jacobi preconditioner

Barry Smith bsmith at petsc.dev
Tue Aug 5 07:25:53 CDT 2025


   Then Jacob has to come back and finish the baking.

    Barry


> On Aug 4, 2025, at 11:27 PM, Junchao Zhang <junchao.zhang at gmail.com> wrote:
> 
> CUPM is half-baked, and only supports Vec and MATDENSE. 
> 
> --Junchao Zhang
> 
> 
> On Mon, Aug 4, 2025 at 7:31 PM Barry Smith <bsmith at petsc.dev> wrote:
>> 
>>   I thought the CUPM code was intended to allow us to have common code between NVIDIA and AMD?
>> 
>> 
>>> On Jul 31, 2025, at 11:05 AM, Junchao Zhang <junchao.zhang at gmail.com> wrote:
>>> 
>>> What would embarrass me more is to copy the same code to MatGetDiagonal_SeqAIJHIPSPARSE.
>>> 
>>> --Junchao Zhang
>>> 
>>> On Wed, Jul 30, 2025 at 1:34 PM Barry Smith <bsmith at petsc.dev> wrote:
>>>> 
>>>>    We absolutely should have a MatGetDiagonal_SeqAIJCUSPARSE(). It's somewhat embarrassing that we don't provide this.
>>>> 
>>>>    I have found some potential code at https://stackoverflow.com/questions/60311408/how-to-get-the-diagonal-of-a-sparse-matrix-in-cusparse
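>>>> 
>>>>    One straightforward way to do it on the device is a one-thread-per-row pass over the CSR arrays; a minimal CUDA sketch of the idea (not PETSc code; 0-based CSR indexing, illustrative names):
>>>> 
>>>>    /* Each thread scans its row for the stored diagonal entry; rows with no
>>>>       stored diagonal entry get 0. */
>>>>    __global__ void CSRGetDiagonal(int n, const int *rowptr, const int *colind,
>>>>                                   const double *val, double *diag)
>>>>    {
>>>>      int row = blockIdx.x * blockDim.x + threadIdx.x;
>>>>      if (row >= n) return;
>>>>      double d = 0.0;
>>>>      for (int j = rowptr[row]; j < rowptr[row + 1]; j++) {
>>>>        if (colind[j] == row) { d = val[j]; break; }
>>>>      }
>>>>      diag[row] = d;
>>>>    }
>>>> 
>>>>    launched as, e.g., CSRGetDiagonal<<<(n + 255) / 256, 256>>>(n, rowptr, colind, val, diag);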
>>>> 
>>>>    Barry
>>>> 
>>>>> On Jul 28, 2025, at 11:43 AM, Junchao Zhang <junchao.zhang at gmail.com> wrote:
>>>>> 
>>>>> Yes, MatGetDiagonal_SeqAIJCUSPARSE hasn't been implemented.  The petsc/cuda and petsc/kokkos backends are separate code.  
>>>>> If petsc/kokkos meets your needs, then just use it.  For PETSc users, we hope the only difference will be the extra --download-kokkos --download-kokkos-kernels options at configure time. 
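>>>>> For example, something like (illustrative; adjust the options to your system)
>>>>> 
>>>>>   ./configure --with-cuda --download-kokkos --download-kokkos-kernels
>>>>> 
>>>>> at configure time, plus -mat_type aijkokkos -vec_type kokkos (or MATAIJKOKKOS/VECKOKKOS in the code) at run time, should be all that changes.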
>>>>> 
>>>>> --Junchao Zhang
>>>>> 
>>>>> 
>>>>> On Mon, Jul 28, 2025 at 2:51 AM LEDAC Pierre <Pierre.LEDAC at cea.fr> wrote:
>>>>>> Hello all,
>>>>>> 
>>>>>> We are solving with PETSc a linear system that is updated every time step (constant stencil, but changing coefficients).
>>>>>> 
>>>>>> The matrix is preallocated once with MatSetPreallocationCOO(), then filled each time step with MatSetValuesCOO(); we pass device pointers for coo_i, coo_j, and the coefficient values (see the sketch below).
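>>>>>> 
>>>>>> A minimal sketch of the pattern (names are illustrative; in our case coo_i, coo_j and coo_v are device pointers):
>>>>>> 
>>>>>> #include <petscmat.h>
>>>>>> 
>>>>>> /* One-time setup: fix the COO nonzero pattern of the device matrix */
>>>>>> static PetscErrorCode SetupPattern(Mat A, PetscCount ncoo, PetscInt *coo_i, PetscInt *coo_j)
>>>>>> {
>>>>>>   PetscFunctionBeginUser;
>>>>>>   PetscCall(MatSetType(A, MATAIJCUSPARSE));
>>>>>>   PetscCall(MatSetPreallocationCOO(A, ncoo, coo_i, coo_j)); /* called once */
>>>>>>   PetscFunctionReturn(PETSC_SUCCESS);
>>>>>> }
>>>>>> 
>>>>>> /* Every time step: only the values change; coo_v stays on the device */
>>>>>> static PetscErrorCode RefillValues(Mat A, const PetscScalar *coo_v)
>>>>>> {
>>>>>>   PetscFunctionBeginUser;
>>>>>>   PetscCall(MatSetValuesCOO(A, coo_v, INSERT_VALUES));
>>>>>>   PetscFunctionReturn(PETSC_SUCCESS);
>>>>>> }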
>>>>>> 
>>>>>> It works fine with a GMRES KSP solver and a Jacobi PC, but we are surprised to see that at every time step, during PCSetUp, MatGetDiagonal_SeqAIJ is called even though the matrix is on the device. Looking at the API, it seems there is no MatGetDiagonal_SeqAIJCUSPARSE(), only a MatGetDiagonal_SeqAIJKOKKOS().
>>>>>> 
>>>>>> Does that mean we should use the Kokkos backend in PETSc to have the Jacobi preconditioner built directly on the device? Or am I doing something wrong?
>>>>>> 
>>>>>> NB: GMRES itself runs well on the device.
>>>>>> 
>>>>>> I could use -ksp_reuse_preconditioner to avoid the Jacobi preconditioner being rebuilt on the host at every solve, but that significantly increases the number of iterations.
>>>>>> 
>>>>>> Thanks,
>>>>>> 
>>>>>> <pastedImage.png>
>>>>>> 
>>>>>> Pierre LEDAC
>>>>>> Commissariat à l’énergie atomique et aux énergies alternatives
>>>>>> Centre de SACLAY
>>>>>> DES/ISAS/DM2S/SGLS/LCAN
>>>>>> Bâtiment 451 – point courrier n°41
>>>>>> F-91191 Gif-sur-Yvette
>>>>>> +33 1 69 08 04 03
>>>>>> +33 6 83 42 05 79
>>>> 
>> 
