[petsc-dev] HDF5 usage in PETSc without MPI? WTF? Totally broken?
Barry Smith
bsmith at mcs.anl.gov
Fri Mar 18 14:50:58 CDT 2016
I am confused about the usage of HDF5 from PETSc.
In hdf5.py
  def configureLibrary(self):
    if self.libraries.check(self.dlib, 'H5Pset_fapl_mpio'):
      self.addDefine('HAVE_H5PSET_FAPL_MPIO', 1)
    return
So PETSc does not require HDF5 to have been built with MPI (for example, if it was built by someone else without MPI).
In PetscErrorCode PetscViewerFileSetName_HDF5(PetscViewer viewer, const char name[])
#if defined(PETSC_HAVE_H5PSET_FAPL_MPIO)
  PetscStackCallHDF5(H5Pset_fapl_mpio,(plist_id, PetscObjectComm((PetscObject)viewer), info));
#endif
so it only selects the MPI-IO driver (needed for parallel/collective I/O) if the symbol was found, which means HDF5 was built with MPI.
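To make that concrete, here is a minimal sketch of the driver-selection pattern (a hypothetical helper, not the actual viewer code): when the symbol exists the MPI-IO driver is put on the file access property list; otherwise the default serial driver is used and every rank that calls H5Fcreate opens the file independently.

#include <hdf5.h>
#include <mpi.h>

/* Sketch only: illustrates how the define controls which file driver is used. */
static hid_t sketch_open_hdf5_file(MPI_Comm comm, const char *name)
{
  hid_t plist_id = H5Pcreate(H5P_FILE_ACCESS);
#if defined(PETSC_HAVE_H5PSET_FAPL_MPIO)
  /* HDF5 built with MPI: all ranks share one file through the MPI-IO driver */
  H5Pset_fapl_mpio(plist_id, comm, MPI_INFO_NULL);
#endif
  /* without the define this falls back to the default (serial) driver */
  hid_t file_id = H5Fcreate(name, H5F_ACC_TRUNC, H5P_DEFAULT, plist_id);
  H5Pclose(plist_id);
  return file_id;
}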
But in places like VecView_MPI_HDF5(Vec xin, PetscViewer viewer)
it uses HDF5 as if the I/O were collective, though it might not be, because HDF5 could have been built without MPI.
So if I build PETSc with a non-MPI HDF5 and yet use the HDF5 viewer in parallel, do the generated HDF5 files contain garbage?
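For reference, the write pattern in the viewer amounts to something like the following sketch (a hypothetical helper, simplified to a 1-D dataset): each rank selects its own hyperslab of one global dataset and writes its local values. With the MPI-IO driver this is well defined; with a serial HDF5 each rank holds an independent file handle, so the ranks clobber each other's writes.

#include <hdf5.h>

/* Sketch only: every rank writes its local chunk of a shared 1-D dataset. */
static void sketch_write_local_chunk(hid_t file_id, hsize_t global_n, hsize_t local_n,
                                     hsize_t offset, const double *local_vals)
{
  hid_t filespace = H5Screate_simple(1, &global_n, NULL);
  hid_t memspace  = H5Screate_simple(1, &local_n, NULL);
  hid_t dset      = H5Dcreate2(file_id, "/x", H5T_NATIVE_DOUBLE, filespace,
                               H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
  /* each rank selects the slice of the global dataset it owns */
  H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &offset, NULL, &local_n, NULL);

  hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
#if defined(PETSC_HAVE_H5PSET_FAPL_MPIO)
  /* collective transfer; only meaningful when the file uses the MPI-IO driver */
  H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
#endif
  H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, local_vals);

  H5Pclose(dxpl); H5Dclose(dset); H5Sclose(memspace); H5Sclose(filespace);
}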
It seems to me we need to have hdf5.py REQUIRE the existence of H5Pset_fapl_mpio?
Barry