[petsc-users] gamg out of memory with gpu

Edoardo Centofanti edoardo.centofanti01 at universitadipavia.it
Mon Dec 26 09:29:05 CST 2022


Thank you for your answer. Could you provide the full path of the example
you have in mind? The one I found does not seem to use algebraic
multigrid, only the geometric one.

Thanks,
Edoardo

On Mon, Dec 26, 2022 at 3:39 PM Matthew Knepley <knepley at gmail.com>
wrote:

> On Mon, Dec 26, 2022 at 4:41 AM Edoardo Centofanti <
> edoardo.centofanti01 at universitadipavia.it> wrote:
>
>> Hi PETSc Users,
>>
>> I am experiencing some issues with the GAMG preconditioner when used on
>> the GPU.
>> In particular, it seems to go out of memory very easily (around 5000
>> dofs are enough to make it throw the "[0]PETSC ERROR: cuda error 2
>> (cudaErrorMemoryAllocation) : out of memory" error).
>> I see these issues with both single and multiple GPUs (on the same node
>> or on different nodes). The exact same problems work like a charm with
>> HYPRE BoomerAMG on GPUs.
>> With both preconditioners I enable device acceleration through the usual
>> command-line options "-dm_vec_type cuda" and "-dm_mat_type aijcusparse"
>> (I am working with structured meshes). My PETSc version is 3.17.
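>>
>> For concreteness, the two option sets look roughly like this (a sketch;
>> the BoomerAMG flags are the standard PETSc ones, not copied from my run
>> scripts):
>>
>>     # GAMG on GPU -- runs out of memory:
>>     -pc_type gamg -dm_vec_type cuda -dm_mat_type aijcusparse
>>     # BoomerAMG on GPU -- works:
>>     -pc_type hypre -pc_hypre_type boomeramg -dm_vec_type cuda -dm_mat_type aijcusparse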
>>
>> Is this a known issue of the GAMG preconditioner?
>>
>
> No. Can you get it to do this with a PETSc example? Say SNES ex5?
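>
> In recent PETSc releases that example is src/snes/tutorials/ex5.c. A
> minimal sketch of a run that should exercise GAMG on the GPU (the
> refinement level here is arbitrary; adjust to reach your problem size):
>
>     cd $PETSC_DIR/src/snes/tutorials && make ex5
>     ./ex5 -da_refine 5 -snes_monitor \
>           -pc_type gamg -dm_vec_type cuda -dm_mat_type aijcusparse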
>
>   Thanks,
>
>      Matt
>
>
>> Thank you in advance,
>> Edoardo
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>