[petsc-users] DMPlex and HDF5 vector ordering
Ataollah Mesgarnejad
amesga1 at tigers.lsu.edu
Fri May 8 16:51:51 CDT 2015
Hi,
I just made our fork public at https://bitbucket.org/mesgarnejad/petsc.
It's a work in progress and nothing is settled yet, but you can use it right
now to save and load the global Vecs of the DMPlex you are using.
First, set the global-to-natural SF:
PetscSF G2N;
ierr = DMPlexCreateGlobalToNaturalPetscSF(distDM,pointSF,seqSection,&G2N);CHKERRQ(ierr);
ierr = DMPlexSetGlobalToNaturalPetscSF(distDM,G2N);CHKERRQ(ierr);
where:
- distDM and pointSF come from DMPlexDistribute()
- seqSection is the data layout for the original DM (I'm trying to fix
this so you won't need to pass it).
Then, when saving or loading, push the native format onto your viewer:
ierr = PetscViewerPushFormat(hdf5Viewer,PETSC_VIEWER_NATIVE);CHKERRQ(ierr);
You can see an example of writing and loading the coordinates of a DM over
different numbers of processes in our fork at
src/dm/impls/plex/examples/tests/ex14.c
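Putting the pieces together, a minimal sketch of a save/load cycle might look like the following. This is a sketch only: it assumes distDM, pointSF, seqSection, and a global Vec vec have already been set up as described above, and that the fork's DMPlexCreateGlobalToNaturalPetscSF/DMPlexSetGlobalToNaturalPetscSF calls are available; the file name "vec.h5" is a placeholder.

```c
PetscSF     G2N;
PetscViewer viewer;

/* Attach the global-to-natural SF to the distributed DM */
ierr = DMPlexCreateGlobalToNaturalPetscSF(distDM, pointSF, seqSection, &G2N);CHKERRQ(ierr);
ierr = DMPlexSetGlobalToNaturalPetscSF(distDM, G2N);CHKERRQ(ierr);

/* Save in natural (process-count-independent) ordering */
ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "vec.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_NATIVE);CHKERRQ(ierr);
ierr = VecView(vec, viewer);CHKERRQ(ierr);
ierr = PetscViewerPopFormat(viewer);CHKERRQ(ierr);
ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

/* Load back, possibly on a different number of processes */
ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "vec.h5", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_NATIVE);CHKERRQ(ierr);
ierr = VecLoad(vec, viewer);CHKERRQ(ierr);
ierr = PetscViewerPopFormat(viewer);CHKERRQ(ierr);
ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
```

The point of PETSC_VIEWER_NATIVE here is that the Vec is permuted into the natural ordering before being written, so the file contents do not depend on the partition.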
Again, this is a work in progress, so it's subject to change.
Best,
Ata
On Fri, May 8, 2015 at 7:43 AM, Matthew Knepley <knepley at gmail.com> wrote:
> On Fri, May 8, 2015 at 1:48 AM, Justin Chang <jychang48 at gmail.com> wrote:
>
>> I also had the same issue. My current workaround is the following.
>>
>> 1) Run the first DMPlex program on one process and write the vector into
>> HDF5.
>>
>> 2) Run the second DMPlex program with any number of processes but do the
>> following:
>>
>> 3) After you create the initial DMPlex on rank 0, but before distributing
>> it, duplicate it and create its PetscSection and Vec.
>>
>> 4) Load the HDF5 file into that vector. At this point the ordering is the
>> same.
>>
>> 5) Distribute the original DM and save the PetscSF.
>>
>> 6) Call DMPlexDistributeField() to distribute the vector.
>>
>>
>> This will guarantee the right ordering for the second program no matter
>> how many processes it uses. The only drawback is that the first program
>> has to be run in serial. I am also looking for a better way. Matt, any
>> thoughts?
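>> Steps 5 and 6 above might be sketched like this (a sketch only, with
>> placeholder names: it assumes the serial DM dm, its PetscSection section,
>> and the Vec vec already filled from HDF5 as in steps 3-4):

```c
DM           dmDist;
PetscSF      distSF;
PetscSection distSection;
Vec          distVec;

/* Step 5: distribute the original DM, keeping the migration SF */
ierr = DMPlexDistribute(dm, 0, &distSF, &dmDist);CHKERRQ(ierr);

/* Step 6: push the field data through the same SF */
ierr = PetscSectionCreate(PETSC_COMM_WORLD, &distSection);CHKERRQ(ierr);
ierr = VecCreate(PETSC_COMM_WORLD, &distVec);CHKERRQ(ierr);
ierr = DMPlexDistributeField(dmDist, distSF, section, vec, distSection, distVec);CHKERRQ(ierr);
```

>> Here distVec ends up with the entries of vec redistributed consistently
>> with the partition of dmDist, whatever the process count.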
>>
>
> Ata and Blaise have a pull request coming that creates a "natural
> ordering" for a Plex, similar to the
> one used by DMDA, so you get output that is invariant to process number.
> It may take until the end
> of the summer to get it fully integrated, but it is very close.
>
> Thanks,
>
> Matt
>
>
>> Thanks,
>> Justin
>>
>>
>> On Friday, May 8, 2015, Adrian Croucher <a.croucher at auckland.ac.nz>
>> wrote:
>>
>>> hi,
>>>
>>> I create a Vec on a DMPlex using DMPlexCreateGlobalVec(), then write it
>>> to HDF5 using PetscViewerHDF5Open() and VecView().
>>>
>>> I then try to read it back in later (in another program, but using the
>>> same DMPlex) using PetscViewerHDF5Open() and VecLoad().
>>>
>>> It looks like the ordering of the final vector entries in the second
>>> program depends on how many processors I use. If they are the same in both
>>> programs, I get the right ordering, but if they aren't, I don't. Is that
>>> expected? If so, is there any way to guarantee the right ordering when I
>>> read the Vec back in?
>>>
>>> - Adrian
>>>
>>> --
>>> Dr Adrian Croucher
>>> Senior Research Fellow
>>> Department of Engineering Science
>>> University of Auckland, New Zealand
>>> email: a.croucher at auckland.ac.nz
>>> tel: +64 (0)9 923 84611
>>>
>>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
--
A. Mesgarnejad, Ph.D.
Postdoctoral Researcher
Center for Computation & Technology
Louisiana State University
2093 Digital Media Center,
Baton Rouge, La 70803
www.mesgarnejad.com