[petsc-dev] DMplex: Natural Ordering and subDM

Blaise A Bourdin bourdin at lsu.edu
Mon Nov 27 22:59:42 CST 2017


There may be good reasons to want to read / write in a given ordering: post-processing an existing computation, applying non-trivial boundary conditions that need to be computed separately, or restarting a computation on a different number of processors.
Also, Exodus has a restriction on cell ordering (cells in an element block must be numbered sequentially), so the distributed cell ordering may not be acceptable.
Exodus 6 introduced element sets, which are free of this limitation, but as far as I know, no mesh generator or post-processing tool handles element sets.

I ended up inverting the migration SF, broadcasting the distributed section back to the original mesh, then generating SFnatural using DMPlexCreateGlobalToNaturalSF. It is ugly, but it is done only once.
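For concreteness, a rough sketch of that sequence is below. The calls are the standard PETSc API, but the exact ordering, the variable names (dm, sectionOrig, ...), and the error handling are reconstructed here purely for illustration:

#include <petscdmplex.h>

/* Sketch only: build sfNatural when the section was created after distribution.
   "dm" is assumed to be the distributed DMPlex that kept its migration SF. */
static PetscErrorCode BuildNaturalSF(DM dm, PetscSF *sfNatural)
{
  PetscSF        sfMigration, sfMigrationInv;
  PetscSection   section, sectionOrig;
  PetscInt      *remoteOffsets;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* SF that migrated points from the original (natural) mesh to the distributed one */
  ierr = DMPlexGetMigrationSF(dm, &sfMigration);CHKERRQ(ierr);
  /* Invert it so data can be sent back to the original point numbering */
  ierr = PetscSFCreateInverseSF(sfMigration, &sfMigrationInv);CHKERRQ(ierr);
  /* Broadcast the distributed section back onto the original mesh layout */
  ierr = DMGetDefaultSection(dm, &section);CHKERRQ(ierr);
  ierr = PetscSectionCreate(PetscObjectComm((PetscObject) dm), &sectionOrig);CHKERRQ(ierr);
  ierr = PetscSFDistributeSection(sfMigrationInv, section, &remoteOffsets, sectionOrig);CHKERRQ(ierr);
  /* DMPlexCreateGlobalToNaturalSF expects the section as it was laid out before distribution */
  ierr = DMPlexCreateGlobalToNaturalSF(dm, sectionOrig, sfMigration, sfNatural);CHKERRQ(ierr);
  /* sfNatural can now be used with DMPlexGlobalToNaturalBegin/End */
  ierr = PetscFree(remoteOffsets);CHKERRQ(ierr);
  ierr = PetscSectionDestroy(&sectionOrig);CHKERRQ(ierr);
  ierr = PetscSFDestroy(&sfMigrationInv);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}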

Still about Exodus: I now have parallel I/O (MPI-IO through parallel netcdf) working for nodal (linear and quadratic Lagrange elements) and zonal fields (in Exodus jargon), in both natural and standard ordering. I can have a pull request and documented examples and tests ready in a few days.

Blaise


On Nov 27, 2017, at 10:21 PM, Jed Brown <jed at jedbrown.org> wrote:

Matthew Knepley <knepley at gmail.com> writes:

On Mon, Nov 27, 2017 at 9:24 PM, Jed Brown <jed at jedbrown.org> wrote:

Matthew Knepley <knepley at gmail.com> writes:

On Mon, Nov 27, 2017 at 8:08 PM, Jed Brown <jed at jedbrown.org> wrote:

I don't know the answer to your question (Matt?), but do you really need
to reorder the entire mesh or would it be sufficient to label your
points with their original numbering?


Maybe I am wrong, but I think it amounts to the same thing. If we are going
to output things in parallel, we would need to communicate to the writing
process, which this essentially does.

Writing a label doesn't require redistribution of the mesh.  It's possible
to do parallel IO.
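
(For illustration, a minimal sketch of that labeling idea; the serial DM "dmSerial" and the label name "natural point" are invented here, and the point is simply that DMPlexDistribute migrates labels with the mesh, so the original numbering stays available in parallel without redistributing back:)

  DMLabel        label;
  DM             dmDist;
  PetscInt       pStart, pEnd, p;
  PetscErrorCode ierr;

  /* Record each point's original number in a label before distributing */
  ierr = DMCreateLabel(dmSerial, "natural point");CHKERRQ(ierr);
  ierr = DMGetLabel(dmSerial, "natural point", &label);CHKERRQ(ierr);
  ierr = DMPlexGetChart(dmSerial, &pStart, &pEnd);CHKERRQ(ierr);
  for (p = pStart; p < pEnd; ++p) {
    ierr = DMLabelSetValue(label, p, p);CHKERRQ(ierr);
  }
  /* Labels travel with the mesh, so dmDist still knows the original numbering */
  ierr = DMPlexDistribute(dmSerial, 0, NULL, &dmDist);CHKERRQ(ierr);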


My understanding is that third-party programs want the mesh in the
original order, so we want it ordered in the HDF5 file in the original
order. You could, I guess, write it in that order, but it seems messy to
write stuff all over the place in the file. Is that what you mean?

What is the third-party program doing?  It might be easier for it to
apply the permutation than for PETSc to redistribute in order to write
the file in a possibly poor ordering.

--
Department of Mathematics and Center for Computation & Technology
Louisiana State University, Baton Rouge, LA 70803, USA
Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin






