[petsc-users] Parallel IO of DMPlex

Justin Chang jychang48 at gmail.com
Mon Jun 29 03:20:30 CDT 2015


Given the format I described in the previous post, I have the following
mesh:

8
9
3
2
0 1 3
1 4 3
1 2 4
2 5 4
3 4 6
4 7 6
4 5 7
5 8 7
0.0 0.0
0.5 0.0
1.0 0.0
0.0 0.5
0.5 0.5
1.0 0.5
0.0 1.0
0.5 1.0
1.0 1.0
8
0 1 2 3 5 6 7 8

If I want to partition this mesh across 2 MPI processes, I would:

a) have all processes open the file and read the first four entries.
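
Something like this is what I have in mind for (a) -- a minimal sketch
assuming the file holds raw, native-endian 32-bit integers and doubles,
with error checking omitted; the name mesh.bin is just a placeholder:

    /* Every rank reads the 4-integer header collectively. */
    MPI_File fh;
    int      header[4]; /* numCells, numVertices, nodesPerElem, dim */

    MPI_File_open(PETSC_COMM_WORLD, "mesh.bin", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_read_at_all(fh, 0, header, 4, MPI_INT, MPI_STATUS_IGNORE);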

b) declare that rank 0 holds cells 0-3 and rank 1 holds cells 4-7. A special
algorithm would be needed to handle cases where the number of cells is not
divisible by the number of MPI processes.
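
For the non-divisible case, I am picturing a plain block partition that
spreads the remainder over the first few ranks (a sketch, not anything
PETSc provides):

    /* Contiguous blocks of cells; the first (numCells % size) ranks
       get one extra cell each. */
    PetscMPIInt rank, size;
    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
    MPI_Comm_size(PETSC_COMM_WORLD, &size);

    PetscInt numCells  = header[0];
    PetscInt quotient  = numCells / size;
    PetscInt remainder = numCells % size;
    PetscInt myCount   = quotient + (rank < remainder ? 1 : 0);
    PetscInt myStart   = rank*quotient + PetscMin((PetscInt)rank, remainder);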

c) rank 0 reads 0 1 3 1 4 3 1 2 4 2 5 4, and rank 1 reads 3 4 6 4 7 6 4 5 7
5 8 7.
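
Continuing the sketch, each rank would seek past the header to its own
slice of the connectivity (again assuming 32-bit ints and the layout I
described):

    /* Read myCount cells' worth of connectivity at this rank's offset. */
    PetscInt   nodesPerElem = header[2];
    MPI_Offset connOffset   = 4*sizeof(int)
                            + (MPI_Offset)myStart*nodesPerElem*sizeof(int);
    int       *cells;

    PetscMalloc1(myCount*nodesPerElem, &cells);
    MPI_File_read_at_all(fh, connOffset, cells,
                         (int)(myCount*nodesPerElem), MPI_INT,
                         MPI_STATUS_IGNORE);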

d) traverse the respective arrays and read in the coordinates associated
with the vertices. I am guessing this part is "tricky" because I cannot
simply do a naive partitioning of the coordinates the way I could with the
connectivity?
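
The best I can come up with is to collect the unique vertex numbers from
the local connectivity and pull just those coordinates (a sketch; one
small read per vertex is naive, and in practice these would be batched
or described with an MPI datatype):

    /* Unique, sorted vertex numbers referenced by this rank's cells. */
    PetscInt  n = myCount*nodesPerElem, i;
    PetscInt *verts;
    double   *coords;

    PetscMalloc1(n, &verts);
    for (i = 0; i < n; ++i) verts[i] = cells[i];
    PetscSortRemoveDupsInt(&n, verts);   /* n is now the unique count */

    /* Coordinate block assumed to start right after the connectivity. */
    MPI_Offset coordBase = 4*sizeof(int)
                         + (MPI_Offset)numCells*nodesPerElem*sizeof(int);
    PetscMalloc1(2*n, &coords);
    for (i = 0; i < n; ++i) {
      MPI_Offset off = coordBase + (MPI_Offset)verts[i]*2*sizeof(double);
      MPI_File_read_at(fh, off, &coords[2*i], 2, MPI_DOUBLE,
                       MPI_STATUS_IGNORE);
    }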

e) Won't worry about exterior nodes for now.

f) With the above, I invoke DMPlexCreateFromDAG() (of course making sure
that the numbering is in the right format).
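
For rank 0 of the example above, I believe the call would look roughly
like this -- cells numbered first, then vertices, with the cone entries
renumbered into that local scheme (my reading of the man page;
corrections welcome):

    /* Rank 0: 4 cells (points 0..3) and 6 vertices (points 4..9).
       Global vertices {0,1,2,3,4,5} map to local points {4,...,9}.
       Assumes dm is a DMPLEX with DMSetDimension(dm, 2) already done. */
    PetscInt    numPoints[2]  = {6, 4};   /* per depth: vertices, cells */
    PetscInt    coneSize[10]  = {3, 3, 3, 3, 0, 0, 0, 0, 0, 0};
    PetscInt    cones[12]     = {4, 5, 7,  5, 8, 7,  5, 6, 8,  6, 9, 8};
    PetscInt    coneOrientations[12] = {0};
    PetscScalar vertexCoords[12] = {0.0,0.0, 0.5,0.0, 1.0,0.0,
                                    0.0,0.5, 0.5,0.5, 1.0,0.5};

    DMPlexCreateFromDAG(dm, 1, numPoints, coneSize, cones,
                        coneOrientations, vertexCoords);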

g) When creating an SF for the shared boundary, does this mean determining
which nodes are ghosted and which ones are local? For a structured-looking
grid this would be easy, since I could employ some sort of round-robin
scheme.
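
If I understand the SF correctly, for the example mesh rank 1 would mark
its copies of the shared vertices 3, 4, 5 as leaves pointing at rank 0's
owned points, something like this (assuming lowest-rank ownership and
the local numbering from step f):

    /* Rank 1: global vertices {3,...,8} are local points {4,...,9};
       vertices 3,4,5 are owned by rank 0, where they are points 7,8,9. */
    PetscSF     sf;
    PetscInt    nroots = 10, nleaves = 3, i;
    PetscInt    ilocal[3]  = {4, 5, 6};   /* my copies of verts 3,4,5 */
    PetscSFNode iremote[3];

    for (i = 0; i < 3; ++i) { iremote[i].rank = 0; iremote[i].index = 7 + i; }
    PetscSFCreate(PETSC_COMM_WORLD, &sf);
    PetscSFSetGraph(sf, nroots, nleaves, ilocal, PETSC_COPY_VALUES,
                    iremote, PETSC_COPY_VALUES);
    DMSetPointSF(dm, sf);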

h) Does DMPlexDistribute() also take care of the repartitioning and
redistribution?

Thanks,
Justin

On Sat, Jun 27, 2015 at 6:01 AM, Matthew Knepley <knepley at gmail.com> wrote:

> On Sat, Jun 27, 2015 at 2:05 AM, Justin Chang <jychang48 at gmail.com> wrote:
>
>> Hi everyone,
>>
>> I see that parallel IO of various input mesh formats for DMPlex is not
>> currently supported. However, is there a way to read a custom mesh file in
>> parallel?
>>
>> For instance, I have a binary data file formatted like this:
>>
>> <No. of elements>
>> <No. of vertices>
>> <No. of nodes per element>
>> <Spatial dimension>
>> <Connectivity array ...
>> ....>
>> <Coordinates array ...
>> ....>
>> <No. of exterior nodes>
>> <List of exterior nodes ...
>> ....>
>>
>> Reading this will allow me to create a DMPlex with DMPlexCreateFromDAG().
>> Currently, only the root process reads this binary mesh file and creates
>> the DMPlex whereas the other processes create an empty DMPlex. Then all
>> processes invoke DMPlexDistribute(). This one-to-all distribution seems to
>> be a bottleneck, and I want to know if it's possible to have each process
>> read a local portion of the connectivity and coordinates arrays and let
>> DMPlex/METIS/ParMETIS handle the load balancing and redistribution.
>> Intuitively this would be easy to write, but again I want to know how to do
>> this through leveraging the functions and routines within DMPlex.
>>
>
> This is on our agenda for the fall, but I can describe the process:
>
>   a) Do a naive partition of the cells (easy)
>
>   b) Read the connectivity for "your" cells (easy)
>
>   c) Read "your" coordinates (tricky)
>
>   d) Read "your" exterior (tricky)
>
>   e) Create local DAGs (easy)
>
>   f) Create SF for shared boundary (hard)
>
>   g) Repartition and redistribute (easy)
>
> You could start writing this for your format and we could help. I probably
> will not get to the generic one until late in the year.
>
>   Thanks,
>
>     Matt
>
> Thanks,
>> Justin
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>