[petsc-dev] HDF5 usage in PETSc without MPI? WTF? Totally broken?
Matthew Knepley
knepley at gmail.com
Wed Mar 23 22:44:23 CDT 2016
On Wed, Mar 23, 2016 at 10:03 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> > On Mar 23, 2016, at 7:35 PM, Matthew Knepley <knepley at gmail.com> wrote:
> >
> > On Wed, Mar 23, 2016 at 2:28 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> > Jed and Matt and anyone else who understands the HDF5 viewer
> >
> > No one has answered this. If I get no response, I am going to
> > assume that PETSc requires HDF5 built with MPI and remove the #ifdefs in
> > the code.
> >
> > Are you sure that the Vec code fails when HDF5 is serial?
>
> How would I verify whether the resulting hdf5 file is good or not?
Save and load a parallel vector with a non-MPI HDF5?
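Something along these lines should show it quickly when run on a couple of
ranks (a rough, untested sketch: CHKERRQ error checking is omitted for
brevity, and the file name "roundtrip.h5" is made up):

/* Sketch of a round-trip test against a serially-built HDF5 */
#include <petscvec.h>
#include <petscviewerhdf5.h>

int main(int argc, char **argv)
{
  Vec         x, y;
  PetscViewer viewer;
  PetscReal   norm;
  PetscInt    i, low, high;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Parallel vector whose entries depend on the owning rank's range */
  VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &x);
  VecGetOwnershipRange(x, &low, &high);
  for (i = low; i < high; i++) VecSetValue(x, i, (PetscScalar)i, INSERT_VALUES);
  VecAssemblyBegin(x);
  VecAssemblyEnd(x);
  PetscObjectSetName((PetscObject)x, "x");

  /* Write through the HDF5 viewer */
  PetscViewerHDF5Open(PETSC_COMM_WORLD, "roundtrip.h5", FILE_MODE_WRITE, &viewer);
  VecView(x, viewer);
  PetscViewerDestroy(&viewer);

  /* Read back into a second vector with the same dataset name */
  VecDuplicate(x, &y);
  PetscObjectSetName((PetscObject)y, "x");
  PetscViewerHDF5Open(PETSC_COMM_WORLD, "roundtrip.h5", FILE_MODE_READ, &viewer);
  VecLoad(y, viewer);
  PetscViewerDestroy(&viewer);

  /* A nonzero difference (or an HDF5 error) on >1 rank is the smoking gun */
  VecAXPY(y, -1.0, x);
  VecNorm(y, NORM_INFINITY, &norm);
  PetscPrintf(PETSC_COMM_WORLD, "max difference %g\n", (double)norm);

  VecDestroy(&x);
  VecDestroy(&y);
  PetscFinalize();
  return 0;
}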
Matt
>
> Barry
>
> > I don't know what happens when multiple
> > procs use H5Lexists() and friends. Maybe it's transactional on the file.
> > It does use
> >
> > PetscStackCallHDF5Return(plist_id,H5Pcreate,(H5P_DATASET_XFER));
> > #if defined(PETSC_HAVE_H5PSET_FAPL_MPIO)
> > PetscStackCallHDF5(H5Pset_dxpl_mpio,(plist_id, H5FD_MPIO_COLLECTIVE));
> > #endif
> >
> > If it does require MPI, then fine, take out the #ifdefs.
> >
> > Matt
> >
> > Barry
> >
> > > On Mar 18, 2016, at 2:50 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > >
> > >
> > > I am confused about the usage of HDF5 from PETSc.
> > >
> > > In hdf5.py
> > >
> > > def configureLibrary(self):
> > >   if self.libraries.check(self.dlib, 'H5Pset_fapl_mpio'):
> > >     self.addDefine('HAVE_H5PSET_FAPL_MPIO', 1)
> > >   return
> > >
> > > So PETSc does not require HDF5 to have been built using MPI (for
> > > example, if it was built by someone else without MPI).
> > >
> > > In PetscErrorCode PetscViewerFileSetName_HDF5(PetscViewer viewer, const char name[])
> > >
> > > #if defined(PETSC_HAVE_H5PSET_FAPL_MPIO)
> > > PetscStackCallHDF5(H5Pset_fapl_mpio,(plist_id,PetscObjectComm((PetscObject)viewer),info));
> > > #endif
> > >
> > > so it only sets collective IO if the symbol was found, and hence only
> > > if HDF5 was built for MPI.
> > >
> > > But in places like VecView_MPI_HDF5(Vec xin, PetscViewer viewer)
> > >
> > > it uses MPI as if the IO were collective? Though it might not be,
> > > because HDF5 could have been built without MPI.
> > >
> > > So if I build PETSc with a non-MPI HDF5 and yet use the HDF5 viewer
> > > in parallel, do the generated HDF5 files contain garbage?
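Boiled down to a stand-alone program, the pattern in question looks
roughly like this (a hypothetical distillation, not the actual
VecView_MPI_HDF5 source; "repro.h5" and the one-entry-per-rank layout
are made up for illustration):

/* Hypothetical distillation: every rank opens the same file through a
   serial HDF5 (default fapl, no H5Pset_fapl_mpio) and writes only its
   own hyperslab, with nothing coordinating the handles. */
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
  int     rank, size;
  hsize_t gdim, offset, count = 1;
  hid_t   file, filespace, memspace, dset;
  double  val;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  gdim   = (hsize_t)size;
  offset = (hsize_t)rank;
  val    = (double)rank;

  /* Serial-HDF5 open: each rank gets its own uncoordinated handle to
     the same file, and every rank truncates it */
  file      = H5Fcreate("repro.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
  filespace = H5Screate_simple(1, &gdim, NULL);
  dset      = H5Dcreate2(file, "x", H5T_NATIVE_DOUBLE, filespace,
                         H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
  memspace  = H5Screate_simple(1, &count, NULL);

  /* Each rank writes one entry at its own offset, the same hyperslab
     pattern the Vec viewer relies on, but with no MPI-IO underneath */
  H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &offset, NULL, &count, NULL);
  H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, H5P_DEFAULT, &val);

  H5Dclose(dset);
  H5Sclose(memspace);
  H5Sclose(filespace);
  H5Fclose(file);
  MPI_Finalize();
  return 0;
}

Whether the pieces combine into a consistent file is exactly the open
question here.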
> > >
> > > It seems to me we need to have hdf5.py REQUIRE the existence of
> > > H5Pset_fapl_mpio?
> > >
> > > Barry
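If we do make it required, the configure test above could be inverted
into a hard failure, roughly like this (a sketch only; the base-class
call and the RuntimeError wording follow the usual BuildSystem
convention and are my assumption, not an actual patch):

def configureLibrary(self):
  # sketch: base-class call and error text are assumptions
  config.package.Package.configureLibrary(self)
  if not self.libraries.check(self.dlib, 'H5Pset_fapl_mpio'):
    raise RuntimeError('PETSc requires HDF5 built with MPI; H5Pset_fapl_mpio was not found in the given HDF5 libraries')
  self.addDefine('HAVE_H5PSET_FAPL_MPIO', 1)
  return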
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener