[petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow
neil liu
liufield at gmail.com
Thu Apr 24 11:07:53 CDT 2025
Thanks a lot, Pierre. It works now.
Another question: with the present strategy, after the adaptation we get a
pseudo-serial DM object that has all the information on rank 0 and nothing on
the other ranks.
I then used DMPlexDistribute to partition it, and the resulting
partitioned DMs look correct. Is it safe to proceed this way?
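
Roughly, the redistribution step looks like this (a sketch, not the exact
code; adaptedDM stands for the adapted mesh that lives entirely on rank 0):

  DM      dmDist = NULL;
  PetscSF distSF = NULL;

  /* Collective on the adapted DM's communicator; only rank 0 owns cells. */
  PetscCall(DMPlexDistribute(adaptedDM, 0, &distSF, &dmDist));
  if (dmDist) { /* NULL means no redistribution was necessary */
    PetscCall(DMDestroy(&adaptedDM));
    adaptedDM = dmDist;
  }
  PetscCall(PetscSFDestroy(&distSF));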
Thanks,
Xiaodong
On Wed, Apr 23, 2025 at 4:33 PM Pierre Jolivet <pierre at joliv.et> wrote:
>
>
> On 23 Apr 2025, at 7:28 PM, neil liu <liufield at gmail.com> wrote:
>
>
>
> MMG only supports serial execution, whereas ParMMG supports parallel mode
> (although ParMMG is not as robust or mature as MMG).
> Given this, could you please provide some guidance on how to handle this
> in the code?
>
> Here are my current thoughts; please let me know whether this could work
> as a temporary solution.
>
> That could work,
> Pierre
>
> We may only need to make minor modifications in the
> DMAdaptMetric_Mmg_Plex() subroutine. Specifically:
>
> - Allow all collective PETSc functions to run across all ranks as usual.
>
> - Restrict the MMG-specific logic to run only on rank 0, since MMG is
>   serial-only.
>
> - Add a check before MMG is called to ensure that only rank 0 holds mesh
>   cells, i.e., validate that cEnd - cStart > 0 only on rank 0. If more
>   than one rank holds cells, raise a clear warning or error (see the
>   sketch below).
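>
> A rough sketch of the guarded structure inside DMAdaptMetric_Mmg_Plex()
> (variable names are illustrative; dm is the input plex):
>
>   PetscMPIInt rank;
>   PetscInt    cStart, cEnd;
>
>   PetscCallMPI(MPI_Comm_rank(PetscObjectComm((PetscObject)dm), &rank));
>   PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd));
>   /* ... collective PETSc setup keeps running on every rank, as before ... */
>   if (rank == 0) {
>     /* serial MMG work (mesh construction, MMG3D_mmg3dlib(), ...) only here */
>   }
>   /* ... collective PETSc reconstruction of the adapted DM follows ... */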
>
>
> On Wed, Apr 23, 2025 at 1:11 PM Stefano Zampini <stefano.zampini at gmail.com>
> wrote:
>
>> If mmg does not support parallel communicators, we should handle it
>> internally in the code, always use commself, and raise an error if there
>> are two or more processes in the comm that have cEnd - cStart > 0
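>>
>> A sketch of such a check (assuming the input plex is called dm):
>>
>>   PetscInt    cStart, cEnd;
>>   PetscMPIInt hasCells, ranksWithCells;
>>
>>   PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd));
>>   hasCells = (cEnd - cStart > 0) ? 1 : 0;
>>   PetscCallMPI(MPI_Allreduce(&hasCells, &ranksWithCells, 1, MPI_INT, MPI_SUM,
>>                              PetscObjectComm((PetscObject)dm)));
>>   PetscCheck(ranksWithCells <= 1, PetscObjectComm((PetscObject)dm), PETSC_ERR_SUP,
>>              "Serial MMG needs all cells on one rank, but %d ranks have cells",
>>              (int)ranksWithCells);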
>>
>> On Wed, 23 Apr 2025 at 20:05, neil liu <liufield at gmail.com> wrote:
>>
>>> Thanks a lot, Pierre.
>>> Do you have any suggestions for building a real serial DM from this
>>> gatherDM?
>>> I tried several approaches, but none of them worked. DMClone?
>>>
>>> Thanks,
>>>
>>> On Wed, Apr 23, 2025 at 11:39 AM Pierre Jolivet <pierre at joliv.et> wrote:
>>>
>>>>
>>>>
>>>> On 23 Apr 2025, at 5:31 PM, neil liu <liufield at gmail.com> wrote:
>>>>
>>>> Thanks a lot, Stefano.
>>>> I tried DMPlexGetGatherDM and DMPlexDistributeField, and they give what
>>>> we expected.
>>>> The resulting gatherDM is listed below: rank 0 has all the information
>>>> (which is correct), while rank 1 has nothing.
>>>> I then tried to feed this gatherDM into adaptMMG on rank 0 only (MMG
>>>> seems to work better than ParMMG, which is why I want to try MMG first),
>>>> but the run hangs at the collective PETSc functions in
>>>> DMAdaptMetric_Mmg_Plex(). By the way, the present approach works well
>>>> with 1 rank.
>>>>
>>>> Do you have any suggestions? Build a real serial DM?
>>>>
>>>>
>>>> Yes, you need to change the underlying MPI_Comm as well, but I’m not
>>>> sure if there is any user-facing API for doing this with a one-liner.
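>>>>
>>>> One possible direction (a plain-MPI sketch, not a single PETSc call) is
>>>> to split off a one-process communicator on rank 0 and rebuild the serial
>>>> mesh there:
>>>>
>>>>   MPI_Comm    selfcomm;
>>>>   PetscMPIInt rank;
>>>>
>>>>   PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
>>>>   PetscCallMPI(MPI_Comm_split(PETSC_COMM_WORLD, rank == 0 ? 0 : MPI_UNDEFINED,
>>>>                               0, &selfcomm));
>>>>   /* On rank 0, selfcomm is a 1-process communicator (PETSC_COMM_SELF would
>>>>      also do) on which a serial DM could be rebuilt, e.g. from the gathered
>>>>      mesh's cells and coordinates via DMPlexCreateFromCellListPetsc(); on
>>>>      the other ranks selfcomm == MPI_COMM_NULL. */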
>>>>
>>>> Thanks,
>>>> Pierre
>>>>
>>>> Thanks a lot.
>>>> Xiaodong
>>>>
>>>> DM Object: Parallel Mesh 2 MPI processes
>>>> type: plex
>>>> Parallel Mesh in 3 dimensions:
>>>> Number of 0-cells per rank: 56 0
>>>> Number of 1-cells per rank: 289 0
>>>> Number of 2-cells per rank: 452 0
>>>> Number of 3-cells per rank: 216 0
>>>> Labels:
>>>> depth: 4 strata with value/size (0 (56), 1 (289), 2 (452), 3 (216))
>>>> celltype: 4 strata with value/size (0 (56), 1 (289), 3 (452), 6 (216))
>>>> Cell Sets: 2 strata with value/size (29 (152), 30 (64))
>>>> Face Sets: 3 strata with value/size (27 (8), 28 (40), 101 (20))
>>>> Edge Sets: 1 strata with value/size (10 (10))
>>>> Vertex Sets: 5 strata with value/size (27 (2), 28 (6), 29 (2), 101
>>>> (4), 106 (4))
>>>> Field Field_0:
>>>> adjacency FEM
>>>>
>>>>
>>>>
>>>> On Fri, Apr 18, 2025 at 10:09 AM Stefano Zampini <
>>>> stefano.zampini at gmail.com> wrote:
>>>>
>>>>> If you have a vector distributed on the original mesh, then you can take
>>>>> the SF returned by DMPlexGetGatherDM and pass it to
>>>>> DMPlexDistributeField.
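>>>>>
>>>>> A minimal sketch of that combination (errLocal is assumed to be a local
>>>>> Vec holding the error field, laid out by the local section of the
>>>>> distributed mesh dm):
>>>>>
>>>>>   PetscSF      gatherSF;
>>>>>   DM           gatherDM;
>>>>>   PetscSection locSec, gatherSec;
>>>>>   Vec          gatherErr;
>>>>>
>>>>>   PetscCall(DMPlexGetGatherDM(dm, &gatherSF, &gatherDM)); /* all points on rank 0 */
>>>>>   PetscCall(DMGetLocalSection(dm, &locSec));
>>>>>   PetscCall(PetscSectionCreate(PetscObjectComm((PetscObject)dm), &gatherSec));
>>>>>   PetscCall(VecCreate(PetscObjectComm((PetscObject)dm), &gatherErr));
>>>>>   PetscCall(DMPlexDistributeField(dm, gatherSF, locSec, errLocal,
>>>>>                                   gatherSec, gatherErr));
>>>>>   /* gatherErr now follows the same point ordering as gatherDM, so the
>>>>>      field stays consistent with the gathered mesh. */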
>>>>>
>>>>> On Fri, 18 Apr 2025 at 17:02, neil liu <liufield at gmail.com> wrote:
>>>>>
>>>>>> Dear PETSc developers and users,
>>>>>>
>>>>>> I am currently exploring the integration of MMG3D with PETSc. Since
>>>>>> MMG3D supports only serial execution, I am planning to combine parallel and
>>>>>> serial computing in my workflow. Specifically, after solving the linear
>>>>>> systems in parallel using PETSc:
>>>>>>
>>>>>> 1. I intend to use DMPlexGetGatherDM to collect the entire mesh on
>>>>>>    the root process for input to MMG3D.
>>>>>>
>>>>>> 2. Additionally, I plan to gather the error field onto the root
>>>>>>    process using VecScatter.
>>>>>>
>>>>>> However, I am concerned that the nth value in the gathered error
>>>>>> vector (step 2) may not correspond to the nth element in the gathered mesh
>>>>>> (step 1). Is this a valid concern?
>>>>>>
>>>>>> Do you have any suggestions or recommended practices for ensuring
>>>>>> correct correspondence between the solution fields and the mesh when
>>>>>> switching from parallel to serial mode?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Xiaodong
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Stefano
>>>>>
>>>>
>>>>
>>
>> --
>> Stefano
>>
>