[petsc-dev] DMplex reader / viewers

Matthew Knepley knepley at gmail.com
Wed Jan 22 13:09:23 CST 2014


On Tue, Jan 21, 2014 at 11:01 AM, Matthew Knepley <knepley at gmail.com> wrote:

> On Tue, Jan 21, 2014 at 9:30 AM, Blaise A Bourdin <bourdin at lsu.edu> wrote:
>
>> Hi,
>>
>> It looks like DMplex is steadily gaining maturity, but I/O is lagging
>> behind. As far as I understand, right now PETSc can _read_ a mesh in
>> exodus format and write the binary VTS format, but many issues remain,
>> IMHO:
>>    - The exodus reader relies on a hard-coded nodeset named “marker”.
>>      Generating such a nodeset is not trivial (at least not for complex
>>      meshes generated with Cubit / Trelis).
>>
>
> I will fix this right away. I will put in some registration mechanism for
> labels, and we can iterate.
>

I just looked at the code again, and this is not what happens. The exodus
reader reads all cell, vertex, and side sets. What you are remembering is
that when I use mesh generators (Triangle, TetGen), I name the boundary
markers from their output "marker". Thus there should be no problem. I will
start working on an exodus writer.
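For reference, here is a minimal, untested sketch of the reader side as it
stands (the file name and the "Face Sets" label name are illustrative, and
the exact call names may differ a bit between PETSc versions):

  #include <petscdmplex.h>

  int main(int argc, char **argv)
  {
    DM             dm;
    IS             ids;
    PetscInt       nsets;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    /* Read the ExodusII file; cell, vertex, and side sets come in as DMLabels */
    ierr = DMPlexCreateExodusFromFile(PETSC_COMM_WORLD, "mesh.exo", PETSC_TRUE, &dm);CHKERRQ(ierr);
    /* Side sets are assumed here to land in the "Face Sets" label; list its ids */
    ierr = DMGetLabelSize(dm, "Face Sets", &nsets);CHKERRQ(ierr);
    ierr = DMGetLabelIdIS(dm, "Face Sets", &ids);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD, "Found %d face sets\n", (int) nsets);CHKERRQ(ierr);
    ierr = ISDestroy(&ids);CHKERRQ(ierr);
    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }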

I think we should have an exodus writer, since that format handles the
metadata well, and then have our own HDF5 format, since we can write it in
parallel, use Xdmf for visualization, and extend it ourselves to handle
different discretizations.
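For the writer side, the user-facing calls could look something like the
following (a sketch only, not a settled interface; the file and field names
are made up, it assumes a Vec u attached to the DM dm, and it requires a
PETSc build with HDF5):

  PetscViewer viewer;

  /* Open one HDF5 file collectively; every process writes its piece */
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "solution.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject) u, "temperature");CHKERRQ(ierr); /* dataset name inside the file */
  ierr = DMView(dm, viewer);CHKERRQ(ierr);  /* mesh topology and coordinates */
  ierr = VecView(u, viewer);CHKERRQ(ierr);  /* field values */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

An Xdmf file describing the layout of the HDF5 data could then be generated
separately, so that ParaView / VisIt can open it.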

   Matt



>>    - Reading from or writing to exodus files is not supported.
>>
>
> Yes, I think this is the best target. It should be similar to the HDF5
> writing that we do for PyLith.
>
>    Matt
>
>
>>    - The VTS viewer only allows reading and writing _all_ fields in a DM.
>>      This may be overkill if one only wants to read boundary values, for
>>      instance.
>>    - The VTS viewer loses all information on exodus nodesets and cell sets.
>>      These may have some significance and may be required to exploit the
>>      output of a computation.
>>    - VTS seems to have a concept of “blocks”. My understanding is that the
>>      parallel VTS viewer uses blocks to save subdomains, and that continuity
>>      of piecewise linear fields across subdomain boundaries is lost. It is
>>      not entirely clear to me whether, with this layout, it would be
>>      possible to reopen a file with a different processor count.
>>
>> I can dedicate some resources to improving DMplex I/O. Perhaps we can
>> start a discussion by listing the desired features such readers / writers
>> should have. I will pitch in by listing what matters to me:
>>    - A well-documented and widely adopted file format that most
>>      post-processors / visualization tools can use
>>    - Ability to read / write individual fields
>>    - Preserve _all_ information from the exodus file (node / side / cell
>>      sets), and do not lose continuity of fields across subdomain boundaries
>>    - Ability to reopen a file with a different CPU count
>>    - Support for higher-order elements
>>
>> Am I missing something? If not, we can follow up with discussion on
>> formats and implementation.
>>
>> Blaise
>>
>> --
>> Department of Mathematics and Center for Computation & Technology
>> Louisiana State University, Baton Rouge, LA 70803, USA
>> Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276
>> http://www.math.lsu.edu/~bourdin
>>
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener