<div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Tue, Dec 18, 2018 at 8:28 AM Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Tue, Dec 18, 2018 at 6:54 AM Hapla Vaclav <<a href="mailto:vaclav.hapla@erdw.ethz.ch" target="_blank">vaclav.hapla@erdw.ethz.ch</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div style="word-wrap:break-word">
<br>
<div><br>
<blockquote type="cite">
<div>On 17 Dec 2018, at 20:36, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:</div>
<br class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-interchange-newline">
<div>
<div dir="ltr" style="font-family:Menlo-Regular;font-size:12px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none">
<div class="gmail_quote">
<div dir="ltr">On Mon, Dec 17, 2018 at 12:11 PM Lawrence Mitchell <<a href="mailto:wence@gmx.li" target="_blank">wence@gmx.li</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> On 17 Dec 2018, at 11:56, Hapla Vaclav <<a href="mailto:vaclav.hapla@erdw.ethz.ch" target="_blank">vaclav.hapla@erdw.ethz.ch</a>> wrote:<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
> Matt, great that you reminded me of this email. I actually completely missed it at the time.<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> On 14 Dec 2018, at 19:54, Matthew Knepley via petsc-dev <<a href="mailto:petsc-dev@mcs.anl.gov" target="_blank">petsc-dev@mcs.anl.gov</a>> wrote:<br>
<br>
[...]<br>
<br>
>> I would like:<br>
>><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> - To be able to dump the DMPlex, and fields, on N processes<br>
>><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> I think the current HDF5 does what you want.<br>
>> <span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> - To be able to load the DMPlex, and fields, on P processes. In the first instance, to get things going, I am happy if P=1.<br>
>><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> I think this also works with arbitrary P, although the testing can be described as extremely thin.<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
> I think we need to be much more precise here. First off, there are now three HDF5 formats:<br>
> 1) PETSC_VIEWER_HDF5_PETSC - store Plex graph serialization<br>
> 2) PETSC_VIEWER_HDF5_XDMF - store XDMF-compatible representation of vertices and cells<br>
> 3) PETSC_VIEWER_HDF5_VIZ - slightly extends 2) with some extra data for visualization; you perhaps understand it better<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
> PETSC_VIEWER_DEFAULT/PETSC_VIEWER_NATIVE mean store all three above. I think what Lawrence calls Native should be 1).<br>
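[For concreteness, these formats can be selected with a viewer option string. This is a sketch only; the format tokens `hdf5_petsc`, `hdf5_xdmf`, and `hdf5_viz` are my reading of the PetscViewerFormat names and should be checked against the PETSc version at hand:

```shell
# 1) Plex graph serialization
./app -dm_view hdf5:mesh.h5:hdf5_petsc
# 2) XDMF-compatible representation of vertices and cells
./app -dm_view hdf5:mesh.h5:hdf5_xdmf
# 3) as 2), plus visualization extras
./app -dm_view hdf5:mesh.h5:hdf5_viz
```
]<br>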
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
> The format 1) is currently written in parallel but loaded sequentially<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><a href="https://bitbucket.org/petsc/petsc/src/fbb1886742ac2bbe3b4d1df09bff9724d3fee060/src/dm/impls/plex/plexhdf5.c#lines-834" rel="noreferrer" target="_blank">https://bitbucket.org/petsc/petsc/src/fbb1886742ac2bbe3b4d1df09bff9724d3fee060/src/dm/impls/plex/plexhdf5.c#lines-834</a><br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
> I don't understand how it can work correctly for a distributed mesh while the Point SF (connecting partitions) is, as far as I can see, not stored. I think there's not even a PetscSFView_HDF5(). I will check it more deeply soon.<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
> Format 2) is the one for which I implemented parallel DMLoad().<br>
> Unfortunately, I can't declare it bulletproof until we declare parallel DMPlexInterpolate() as 100% working. I did quite some work towards it in<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><a href="https://bitbucket.org/petsc/petsc/pull-requests/1227/dmplexintepolate-fix-orientation-of-faces/" rel="noreferrer" target="_blank">https://bitbucket.org/petsc/petsc/pull-requests/1227/dmplexintepolate-fix-orientation-of-faces/</a><br>
> but as stated in the PR summary, there are still some examples failing because of the wrong Point SF, which is partly fixed in knepley/fix-plex-interpolate-sf but it seems it's not yet finished. Matt, is there any chance you could look at it at some point in the near future?<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
> I think for Lawrence's purposes, 2) can be used to read the initial mesh file, but for checkpointing 1) seems better ATM because it dumps everything, including interpolated edges & faces, labels, and perhaps some additional information.<br>
<br>
OK, so I guess there are two different things going on here:<br>
<br>
1. Store the data you need to reconstruct a DMPlex<br>
<br>
2. Store the data you need to have a DMPlex viewable via XDMF.<br>
<br>
3. Store the data you need to reconstruct a DMPlex AND have it viewable via XDMF.<br>
<br>
For checkpointing-only purposes, I really need only 1; for viz purposes, one needs only 2; ideally, one would not separate viz and checkpointing files if there is sufficient overlap in the data (I think there is), which requires 3.<br>
<br>
> I will nevertheless keep on working to improve 2) so that it can store edges & faces & labels in the XDMF-compatible way.<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> <span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> For dumping, I think I can do DMView(dm) in PETSc "native" format, and that will write out the topology in a global numbering.<br>
>><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> I would use HDF5.<br>
>> <span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> For the field coefficients, I can just VecView(vec). But there does not appear to be any way of saving the Section so that I can actually attach those coefficients to points in the mesh.<br>
>><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> Hmm, I will check this right now. If it does not exist, I will write it.<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
> No, it certainly doesn't exist. There is only ASCII view implemented.<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> <span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> I can do PetscSectionCreateGlobalSection(section), so that I have the global numbering for offsets, but presumably for the point numbering, I need to convert the local chart into global point numbers using DMPlexCreatePointNumbering?<br>
>><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
>> No, all Sections use local points. We do not use global point numbers anywhere in Plex.<br>
><span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
> True. DMPlex is partition-wise sequential. The only thing which connects the submeshes is the Point SF.<br>
<br>
OK, so I think I misunderstood what the dump format looks like then. For a parallel store/load cycle, when I go from N to P processes, what must I do?<br>
<br>
If I understand correctly the dump on N processes contains:<br>
<br>
For each process, in process-local numbering<br>
<br>
- The DMPlex topology on that process<br>
<br>
Now, given that the only thing that connects these local pieces of the DM together is the point SF, as Vaclav says, it must be the case that a reloadable dump file contains that information.<br>
</blockquote>
<div><br>
</div>
<div>No, the dump contains a completely consistent serial DM. Now I remember why parallel load is not implemented :)</div>
<div>We demand that the dump look identical from any number of procs for all PETSc stuff. Thus we get a global renumbering</div>
<div>and dump with that for all things.</div>
</div>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>Oh I see. I misunderstood this. It is clearer to me now: DMPlexCreatePointNumbering() is employed in DMPlexView_HDF5_Internal().</div>
<br>
<blockquote type="cite">
<div>
<div dir="ltr" style="font-family:Menlo-Regular;font-size:12px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none">
<div class="gmail_quote">
<div><br>
</div>
<div>Now, when we load in parallel, we need to use the new parallel loading from Michael.</div>
</div>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>What exactly do you mean?</div>
<div>If I remember correctly, <span style="font-family:Menlo-Regular">Michael implemented a parallel MED loader, and it uses </span><font face="Menlo-Regular">DMPlexCreateFromCellListParallel() just as my XDMF-HDF5 reader does.</font></div>
<div><font face="Menlo-Regular">Is this function what you mean by "</font><span style="font-family:Menlo-Regular">the new parallel loading"?</span></div></div></div></blockquote><div><br></div><div>Yes exactly. You are doing it right. We just need to extend that. Actually, I think we should probably just store an attribute for interpolated meshes, and interpolate on load. This is much simpler, less storage, and makes everything uniform. What do you think?</div></div></div></blockquote><div><br></div><div>Hmm, the problematic part is the labels. How do we make sure we are labeling the right edge/face?</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word"><div>
<div><span style="font-family:Menlo-Regular">Thanks</span></div>
<div><span style="font-family:Menlo-Regular"><br>
</span></div>
<div><span style="font-family:Menlo-Regular">Vaclav</span></div>
<br>
<blockquote type="cite">
<div>
<div dir="ltr" style="font-family:Menlo-Regular;font-size:12px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none">
<div class="gmail_quote">
<div>I have not yet written that, but it should</div>
<div>be straightforward :) So the below is not really right. We need to call parallel load for the topology. Then we need code that</div>
<div>loads the labels and uses the migration SF to redistribute them, but I think that code already exists for redistribution, so we</div>
<div>just hijack it.</div>
<div><br>
</div>
<div> Matt</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
OK, so to dump a field so that we can reload it we must need:<br>
<br>
- topology (in local numbering)<br>
- point SF (connecting the local pieces of the topology together)<br>
- Vector (dofs), presumably in local layout to make things easier<br>
- Section describing the vector layout (local numbering)<br>
<br>
So to load, I do:<br>
<br>
1. Load and distribute the topology, and construct the new point SF (this presumably gives me a "migration SF" that maps from old points to new points).<br>
<br>
2. Broadcast the Section over migration SF so that we know how many dofs belong to each point in the new topology<br>
<br>
3. Broadcast the Vec over the migration SF to get the dofs to the right place.<br>
<br>
Whenever I think this through on paper it seems "easy", but when I occasionally sit down and try to do it I immediately get lost, so I am usually missing something.<br>
<br>
What am I missing this time?<br>
<br>
Lawrence<br>
<br>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
--<span class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714Apple-converted-space"> </span><br>
<div dir="ltr" class="gmail-m_-3225327440995658666gmail-m_-8465814341127712714gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener</div>
<div><br>
</div>
<div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail-m_-3225327440995658666gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>