[petsc-users] DMPlexCreateFromDAG in parallel

Matthew Knepley knepley at gmail.com
Tue Oct 1 12:32:06 CDT 2019


On Tue, Oct 1, 2019 at 12:25 PM Asitav Mishra <asitav at gmail.com> wrote:

> Matt,
>
> Thanks for the example, it makes it very clear. I understand this example
> assumes that only one process (here Process 1) 'owns' the shared vertices on
> the shared face. Specifically, could I instead give Process 1 the SF:
> numRoots       = 4
> numLeaves      = 2
> local indices  = {1, 2}
> remote indices = {{3, 0}, {2, 0}}
> ?
>

You could choose to have Process 0 own the vertices instead of Process 1,
but you cannot specify the information on both processes, since the SF
takes one-sided information.
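
A sketch of that flipped choice in code (designated initializers just make
the PetscSFNode {rank, index} fields explicit):

  /* Process 1: leaves 1 and 2 point at roots 3 and 2 on rank 0 */
  PetscInt    local[2]  = {1, 2};
  PetscSFNode remote[2] = {{.rank = 0, .index = 3}, {.rank = 0, .index = 2}};
  PetscSFSetGraph(sf, 4, 2, local, PETSC_COPY_VALUES, remote, PETSC_COPY_VALUES);

  /* Process 0 then has no leaves at all */
  PetscSFSetGraph(sf, 4, 0, NULL, PETSC_COPY_VALUES, NULL, PETSC_COPY_VALUES);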


> Also, I'm still not sure how DMPlexCreateFromDAG uses the PetscSF. For
> example, is the SF created before or after the DM on each process? Are
> there any examples available in the PETSc distribution?
>

No, we don't have any examples of creating things in parallel by hand. The
SF would be built after DMPlexCreateFromDAG(), and you set it with
DMSetPointSF(), as in the sketch below.
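
For concreteness, here is a rough sketch for rank 0 of the two-triangle
example below (error checking omitted; depth, numPoints, coneSize, cones,
coneOrientations, and vertexCoords are placeholders you would fill from
your native mesh graph):

  DM          dm;
  PetscSF     sf;
  PetscInt    local[2] = {2, 3};           /* shared points not owned here */
  PetscSFNode remote[2];

  DMCreate(PETSC_COMM_WORLD, &dm);
  DMSetType(dm, DMPLEX);
  DMSetDimension(dm, 2);
  DMPlexCreateFromDAG(dm, depth, numPoints, coneSize, cones,
                      coneOrientations, vertexCoords);

  remote[0].rank = 1; remote[0].index = 2; /* (2, 0) <-> (2, 1) */
  remote[1].rank = 1; remote[1].index = 1; /* (3, 0) <-> (1, 1) */
  PetscSFCreate(PetscObjectComm((PetscObject) dm), &sf);
  PetscSFSetGraph(sf, 4, 2, local, PETSC_COPY_VALUES, remote, PETSC_COPY_VALUES);
  DMSetPointSF(dm, sf);
  PetscSFDestroy(&sf);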

  Thanks,

    Matt


> Thanks,
> Asitav
>
>
>
> On Tue, Oct 1, 2019 at 11:54 AM Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Tue, Oct 1, 2019 at 10:08 AM Asitav Mishra <asitav at gmail.com> wrote:
>>
>>> Matt,
>>>
>>> Thank you very much for your prompt response. Between the two solutions
>>> you suggested, solution (1) would be tough, since it would be difficult
>>> to ensure non-repeating vertices across processors.
>>> Could you provide an example of solution (2), that is, using
>>> CreateFromDAG() on multiple processes?
>>> For example, how can we set up the PetscSF for the simple case of two
>>> triangles (cells 0 and 1) sharing a face (with vertices 1 and 2), with
>>> cell 0 on proc 0 and cell 1 on proc 1?
>>>
>>>         2
>>>        /|\
>>>       / | \
>>>      /  |  \
>>>     0 0 | 1 3
>>>      \  |  /
>>>       \ | /
>>>        \|/
>>>         1
>>>
>>
>> Okay, on 2 processes, you would have something like:
>>
>> Process 0              Process 1
>>      3      <->        1
>>     /|                 |\
>>    / |                 | \
>>   /  |                 |  \
>>  1 0 |                 | 0 3
>>   \  |                 |  /
>>    \ |                 | /
>>     \|                 |/
>>      2      <->        2
>>
>> where we number all points sequentially. We will refer to points as (p,
>> r) where p is the local point number and r is the rank.
>> So you need to match up
>>
>>     (3, 0) <-> (1, 1)
>>     (2, 0) <-> (2, 1)
>>
>> Let's say that Process 1 owns both points. Then the SF for Process 0 is
>>
>>   numRoots       = 4
>>   numLeaves      = 2
>>   local indices  = {2, 3}
>>   remote indices = {{2, 1}, {1, 1}}
>>
>> The SF for Process 1 is
>>
>>   numRoots  = 4
>>   numLeaves = 0
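>>
>> Once the graph is set with PetscSFSetGraph(), a quick sanity check (just
>> a sketch) is
>>
>>   PetscSFSetUp(sf);
>>   PetscSFView(sf, PETSC_VIEWER_STDOUT_WORLD);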
>>
>>   Thanks,
>>
>>     Matt
>>
>>> Any help would be great. Thanks again.
>>> Asitav
>>>
>>> On Sat, Sep 28, 2019 at 1:20 AM Matthew Knepley <knepley at gmail.com>
>>> wrote:
>>>
>>>> On Wed, Sep 25, 2019 at 5:56 PM Asitav Mishra via petsc-users <
>>>> petsc-users at mcs.anl.gov> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I have a mesh graph natively distributed across multiple processors,
>>>>> from which I would like to create a DMPlex mesh using DMPlexCreateFromDAG.
>>>>> I see in the PETSc plex examples that DMPlexCreateFromDAG creates the DM
>>>>> only on the master processor, and the DM is then distributed across
>>>>> multiple (one-to-many) processors. My question is: is it possible to
>>>>> create the DAG locally on each processor and then build the global DM?
>>>>> If yes, are there any such examples?
>>>>>
>>>>
>>>> 1) If you do not mind us redistributing the mesh on input, then you can
>>>> probably do what you want using
>>>>
>>>>
>>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateFromCellListParallel.html
>>>>
>>>> Note that this function wants a unique set of vertices from each
>>>> process, so each vertex must come from exactly one process (see the
>>>> sketch below).
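>>>>
>>>> For two triangles sharing an edge (global vertices 0-3, with 1 and 2 on
>>>> the shared edge), that input layout would look roughly like this
>>>> (illustrative values only; check the man page above for the exact
>>>> calling sequence in your PETSc version):
>>>>
>>>>   /* rank 0: cell 0, owns vertices 0, 1, 2; cells list global vertex numbers */
>>>>   PetscInt  cells0[3]  = {0, 1, 2};
>>>>   PetscReal coords0[6] = {-1., 0.,  0., -1.,  0., 1.};
>>>>   /* rank 1: cell 1, owns only vertex 3, though its cell also uses 1 and 2 */
>>>>   PetscInt  cells1[3]  = {1, 3, 2};
>>>>   PetscReal coords1[2] = {1., 0.};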
>>>>
>>>> 2) If that does not work, you can certainly call CreateFromDAG() on
>>>> multiple processes. However, you must then manually create the PetscSF
>>>> which describes how the mesh is connected in parallel. If this is what
>>>> you need to do, I can give you instructions, but at that point we
>>>> should probably make an example that does it.
>>>>
>>>>   Thanks,
>>>>
>>>>     Matt
>>>>
>>>>
>>>>> Best,
>>>>> Asitav
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Asitav Mishra, PhD
> Research Engineer II, NIA
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

