On Tue, Jan 21, 2014 at 9:30 AM, Blaise A Bourdin <bourdin@lsu.edu> wrote:

> Hi,
>
> It looks like DMPlex is steadily gaining maturity, but I/O is lagging behind. As far as I understand, PETSc can currently _read_ a mesh in Exodus format and write binary VTS output, but many issues remain, IMHO:
>
> - The Exodus reader relies on a hard-coded nodeset named "marker". Generating such a nodeset is not trivial
>   (at least not for complex meshes generated with Cubit / Trelis).

I will fix this right away. I will put in some registration mechanism for labels, and we can iterate.
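To make the discussion concrete, here is an untested sketch of what I have in mind for the user side: read the Exodus file and then query whatever labels the reader attached, instead of assuming a nodeset named "marker" exists. It assumes PETSc was configured with ExodusII support, the file name is a placeholder, and the exact function names may differ between PETSc versions, so treat it as illustrative only.

  /* Untested sketch: read an Exodus mesh and list the labels the reader
     created, rather than assuming a nodeset named "marker" exists. */
  #include <petscdmplex.h>

  int main(int argc, char **argv)
  {
    DM             dm;
    PetscInt       numLabels, l;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    ierr = DMPlexCreateExodusFromFile(PETSC_COMM_WORLD, "mesh.exo", PETSC_TRUE, &dm);CHKERRQ(ierr);
    /* Print every label attached to the DM (cell sets, vertex sets, ...) */
    ierr = DMGetNumLabels(dm, &numLabels);CHKERRQ(ierr);
    for (l = 0; l < numLabels; ++l) {
      const char *name;
      ierr = DMGetLabelName(dm, l, &name);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD, "Label %d: %s\n", (int) l, name);CHKERRQ(ierr);
    }
    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }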
> - Reading from or writing to Exodus files is not supported.

Yes, I think this is the best target. It should be similar to the HDF5 output we do for PyLith.
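Roughly, the user-facing side would be the usual viewer pattern; an untested sketch (assuming PETSc configured with HDF5, with a placeholder file name and a hypothetical helper WriteSolution):

  /* Untested sketch of viewer-based checkpointing, in the style of the
     PyLith HDF5 output. Requires PETSc configured with HDF5. */
  #include <petscdmplex.h>
  #include <petscviewerhdf5.h>

  static PetscErrorCode WriteSolution(DM dm, Vec u)
  {
    PetscViewer    viewer;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = PetscViewerHDF5Open(PetscObjectComm((PetscObject) dm), "sol.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
    ierr = DMView(dm, viewer);CHKERRQ(ierr);  /* mesh / topology */
    ierr = VecView(u, viewer);CHKERRQ(ierr);  /* field values */
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }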

   Matt

> - The VTS viewer only allows reading and writing _all_ fields in a DM. This may be overkill if one only
>   wants to read boundary values, for instance.
> - The VTS viewer loses all information about Exodus nodesets and cell sets. These may have some significance
>   and may be required to exploit the output of a computation.
> - VTS seems to have a concept of "blocks". My understanding is that the parallel VTS viewer uses blocks to
>   save subdomains, and that continuity of piecewise linear fields across subdomain boundaries is lost.
>   It is not entirely clear to me whether, with this layout, it would be possible to reopen a file with a
>   different processor count.
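
For concreteness, the current VTK path is just the generic viewer interface, which is why everything attached to the Vec gets written with no per-field selection; an untested sketch (file name and the .vtu extension are placeholders for the unstructured case, and WriteVTK is a hypothetical helper):

  /* Untested sketch of the current VTK output path described above: the
     viewer writes the whole Vec attached to the DM, with no field selection. */
  #include <petscdmplex.h>

  static PetscErrorCode WriteVTK(DM dm, Vec u)
  {
    PetscViewer    viewer;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = PetscViewerVTKOpen(PetscObjectComm((PetscObject) dm), "sol.vtu", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
    ierr = VecView(u, viewer);CHKERRQ(ierr);  /* every field in u goes into the file */
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }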
>
> I can dedicate some resources to improving DMPlex I/O. Perhaps we can start a discussion by listing the desired features such readers / writers should have. I will pitch in by listing what matters to me:
> - A well-documented and widely adopted file format that most post-processors / visualization tools can use
> - The ability to read / write individual fields
> - Preservation of _all_ information from the Exodus file (node / side / cell sets), with no loss of continuity of fields
>   across subdomain boundaries
> - The ability to reopen a file on a different CPU count
> - Support for higher-order elements
>
> Am I missing something? If not, we can follow up with a discussion of formats and implementation.
>
> Blaise
>
> --
> Department of Mathematics and Center for Computation & Technology
> Louisiana State University, Baton Rouge, LA 70803, USA
> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276  http://www.math.lsu.edu/~bourdin

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener