<div dir="ltr">Hi,<div><br></div><div>I just made our fork public at <a href="https://bitbucket.org/mesgarnejad/petsc">https://bitbucket.org/mesgarnejad/petsc</a>. It's working progress and nothing is settled yet but you can use it right now for saving and loading the global Vectors of the DMPlex you are using.</div><div><br></div><div>Simply you should first set the global to natural SF by:</div><div><br></div><div><div> PetscSF G2N;</div><div> ierr = DMPlexCreateGlobalToNaturalPetscSF(distDM,pointSF,seqSection,&G2N);CHKERRQ(ierr);</div><div> ierr = DMPlexSetGlobalToNaturalPetscSF(distDM,G2N);CHKERRQ(ierr);</div></div><div><br></div><div>where </div><div><ul><li> you get the distDM and the pointSF from the DMPlexDistribute() </li><li>seqSection is the data layout for the original DM (I'm trying to fix this so you wouldn't need to pass this).<br></li></ul></div><div><br></div><div>Then when saving and loading you push native format to your viewer:</div><div><br></div><div> ierr = PetscViewerPushFormat(hdf5Viewer, PETSC_VIEWER_NATIVE);CHKERRQ(ierr);<br></div><div><br></div><div><br></div><div>You can see an example for writing and loading the coordinates of a DM over different number of processors in our fork at src/dm/impls/plex/examples/tests/ex14.c</div><div><br></div><div>Again this working progress so it's subject to changes.</div><div><br></div><div>Best,</div><div>Ata</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, May 8, 2015 at 7:43 AM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="">On Fri, May 8, 2015 at 1:48 AM, Justin Chang <span dir="ltr"><<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I also had the same issue. My current work around is the following.<div><br></div><div>1) Run the first DMPlex program on one process and write the vector into HDF5.</div><div><br></div><div>2) Run the second DMPlex program with any number of processes but do the following:</div><div><br></div><div>3) After you create the initial DMPlex on rank 0, but before distributing it, duplicate it and create its petscsection and vector.</div><div><br></div><div>4) Load the HDF5 file into that vector. At this point the ordering is the same.</div><div><br></div><div>5) Distribute the original DM and save the PetscSF.</div><div><br></div><div>6) Call DMPlexDistributeField() to distribute the vector.</div><div><br></div><div><br></div><div>This will guarantee the right ordering for the second program no matter how many processes it uses. Only drawback is that the first program has to be run in serial. I am also looking for a better way. Matt any thoughts?</div></blockquote><div><br></div></span><div>Ata and Blaise have a pull request coming that creates a "natural ordering" for a Plex, similar to the</div><div>one used by DMDA, so you get output that is invariant to process number. 
Again, this is a work in progress, so it's subject to change.

Best,
Ata

On Fri, May 8, 2015 at 7:43 AM, Matthew Knepley <knepley@gmail.com> wrote:

On Fri, May 8, 2015 at 1:48 AM, Justin Chang <jychang48@gmail.com> wrote:

I also had the same issue. My current workaround is the following:

1) Run the first DMPlex program on one process and write the vector into HDF5.
2) Run the second DMPlex program with any number of processes, but do the following:
3) After you create the initial DMPlex on rank 0, but before distributing it, duplicate it and create its PetscSection and vector.
4) Load the HDF5 file into that vector. At this point the ordering is the same.
5) Distribute the original DM and save the PetscSF.
6) Call DMPlexDistributeField() to distribute the vector.

This guarantees the right ordering for the second program no matter how many processes it uses. The only drawback is that the first program has to be run in serial. I am also looking for a better way. Matt, any thoughts?
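In code, steps 3)-6) come out roughly like the sketch below. The section, vector, and file names are placeholders, the serial layout (serialSection/serialVec) is whatever your code already builds on the undistributed DM, and the exact DMPlexDistribute()/DMPlexDistributeField() signatures should be checked against your PETSc version.

  /* Rough sketch of steps 3)-6): dm is the undistributed DMPlex on which the
     serial layout (serialSection) and vector (serialVec) were created;
     the names and the file "field.h5" are placeholders. */
  DM           distDM;
  PetscSF      pointSF;
  PetscSection distSection;
  Vec          distVec;
  PetscViewer  viewer;

  /* 4) Load the file written by the serial run; the ordering matches here */
  ierr = PetscObjectSetName((PetscObject) serialVec,"field");CHKERRQ(ierr);
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"field.h5",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
  ierr = VecLoad(serialVec,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  /* 5) Distribute the DM, keeping the point SF (check the exact signature
        for your PETSc version) */
  ierr = DMPlexDistribute(dm,0,&pointSF,&distDM);CHKERRQ(ierr);

  /* 6) Distribute the field data along with it */
  ierr = PetscSectionCreate(PETSC_COMM_WORLD,&distSection);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD,&distVec);CHKERRQ(ierr);
  ierr = DMPlexDistributeField(dm,pointSF,serialSection,serialVec,distSection,distVec);CHKERRQ(ierr);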

Ata and Blaise have a pull request coming that creates a "natural ordering" for a Plex, similar to the one used by DMDA, so you get output that is invariant to process number. It may take until the end of the summer to get it fully integrated, but it is very close.

  Thanks,

     Matt

Thanks,
Justin

On Friday, May 8, 2015, Adrian Croucher <a.croucher@auckland.ac.nz> wrote:

hi,
I create a Vec on a DMPlex using DMCreateGlobalVector(), then write it to HDF5 using PetscViewerHDF5Open() and VecView().

I then try to read it back in later (in another program, but using the same DMPlex) using PetscViewerHDF5Open() and VecLoad().

It looks like the ordering of the final vector entries in the second program depends on how many processors I use. If they are the same in both programs, I get the right ordering, but if they aren't, I don't. Is that expected? If so, is there any way to guarantee the right ordering when I read the Vec back in?
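Concretely, the round trip in question is just the following; the file name, Vec name, and viewer variable are placeholders:

  /* Program 1: write the global Vec (file and object names are placeholders) */
  ierr = DMCreateGlobalVector(dm,&v);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject) v,"field");CHKERRQ(ierr);
  /* ... fill v ... */
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"field.h5",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
  ierr = VecView(v,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  /* Program 2: read it back into a global Vec on the same DMPlex */
  ierr = DMCreateGlobalVector(dm,&v);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject) v,"field");CHKERRQ(ierr);
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"field.h5",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
  ierr = VecLoad(v,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);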

- Adrian

-- 
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.croucher@auckland.ac.nz
tel: +64 (0)9 923 84611
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-- 
A. Mesgarnejad, Ph.D.
Postdoctoral Researcher
Center for Computation & Technology
Louisiana State University
2093 Digital Media Center,
Baton Rouge, La 70803
www.mesgarnejad.com