[petsc-users] Question about the memory usage for BDDC preconditioner.

Stefano Zampini stefano.zampini at gmail.com
Wed Aug 14 10:54:17 CDT 2024


Ok, the problem is that the default algorithm for detecting the connected
components of the interface finds a lot of disconnected dofs.
What discretization is this? Nedelec elements? Can you try using
-pc_bddc_use_local_mat_graph 0?
Also, you are using -pc_bddc_monolithic, but that flag aggregates different
fields, and you only have one field.
Note that with Nedelec elements you need a special change of basis for
BDDC to work; see e.g. https://www.osti.gov/servlets/purl/1377770

On Wed, Aug 14, 2024 at 5:15 AM neil liu <liufield at gmail.com>
wrote:

> Hi, Stefano,
>
> Please see the attached output for the smaller case (which ran successfully
> with BDDC). The Error_largerMesh attachment shows the error with the large
> mesh, run with a PETSc debug build.
>
> Thanks a lot,
>
> Xiaodong
>
>
> On Tue, Aug 13, 2024 at 5:47 PM Stefano Zampini <stefano.zampini at gmail.com>
> wrote:
>
>> can you run the same options and add "-ksp_view -pc_bddc_check_level 1"
>> for the smaller case? Also, can you send the full stack trace of the
>> out-of-memory error using a debug version of PETSc?
>> A side note: you should not need -pc_bddc_use_vertices (it is on by
>> default).
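>>
>> For concreteness, the rerun could look something like this (a sketch
>> based on the options you posted; adjust as needed):
>>
>> mpirun -n 4 ./app -pc_type bddc -mat_type is -ksp_monitor -ksp_view
>> -pc_bddc_check_level 1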
>>
>> On Tue, Aug 13, 2024 at 11:17 PM neil liu <liufield at gmail.com>
>> wrote:
>>
>>> Dear Petsc developers,
>>>
>>> I am testing PCBDDC for my vector-based FEM solver (a complex-valued
>>> system). It works well on a coarse mesh (tetrahedral cells: 6,108;
>>> dofs: 39,596). I then tried a finer mesh (tetrahedral cells: 32,036;
>>> dofs: 206,362). ASM seems to work well with
>>>
>>> petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 4 ./app -pc_type asm
>>> -ksp_converged_reason -ksp_monitor -ksp_gmres_restart 100 -ksp_rtol 1e-4
>>> -pc_asm_overlap 4 -sub_pc_type ilu -malloc_view
>>>
>>> while PCBDDC eats up the memory (61 GB) when I try
>>>
>>> petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 4 ./app -pc_type bddc
>>> -pc_bddc_coarse_redundant_pc_type ilu  -pc_bddc_use_vertices
>>> -ksp_error_if_not_converged -mat_type is -ksp_monitor -ksp_rtol 1e-8
>>> -ksp_gmres_restart 30 -ksp_view -malloc_view -pc_bddc_monolithic
>>> -pc_bddc_neumann_pc_type ilu -pc_bddc_dirichlet_pc_type ilu
>>>
>>> The following error with BDDC came out. The memory usage for PCBDDC
>>> (different from PCASM) is also listed below (I am assuming the unit is
>>> bytes, right?). Although BDDC requires more memory, it still seems
>>> normal, right?
>>>
>>> [0]PETSC ERROR: --------------------- Error Message
>>> --------------------------------------------------------------
>>> [0]PETSC ERROR: Out of memory. This could be due to allocating
>>> [0]PETSC ERROR: too large an object or bleeding by not properly
>>> [0]PETSC ERROR: destroying unneeded objects.
>>> [0] Maximum memory PetscMalloc()ed 30829727808 maximum size of entire
>>> process 16899194880
>>> [0] Memory usage sorted by function
>>> ....
>>> [0] 1 240 PCBDDCGraphCreate()
>>> [0] 1 3551136 PCBDDCGraphInit()
>>> [0] 2045 32720 PCBDDCGraphSetUp()
>>> [0] 2 8345696 PCBDDCSetLocalAdjacencyGraph_BDDC()
>>> [0] 1 784 PCCreate()
>>> [0] 1 1216 PCCreate_BDDC()
>>> ....
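>>>
>>> (As a sanity check on the units, here is a minimal sketch, separate from
>>> my solver, that queries the same counters programmatically; both values
>>> are reported in bytes:)
>>>
>>> #include <petscsys.h>
>>>
>>> /* Sketch: print the peak PetscMalloc()ed memory and the current process
>>>    size, the counters -malloc_view reports; both are in bytes. */
>>> static PetscErrorCode ReportMemory(void)
>>> {
>>>   PetscLogDouble mal, mem;
>>>
>>>   PetscFunctionBeginUser;
>>>   PetscCall(PetscMallocGetMaximumUsage(&mal)); /* peak PetscMalloc()ed */
>>>   PetscCall(PetscMemoryGetCurrentUsage(&mem)); /* current process size */
>>>   PetscCall(PetscPrintf(PETSC_COMM_SELF, "malloc max %.0f B, process %.0f B\n", mal, mem));
>>>   PetscFunctionReturn(PETSC_SUCCESS);
>>> }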
>>>
>>> Thanks for your help.
>>>
>>> Xiaodong
>>>
>>>
>>>
>>
>> --
>> Stefano
>>
>

-- 
Stefano