[petsc-users] HDF5Viewer only on worker 0?
Barry Smith
bsmith at mcs.anl.gov
Tue Jan 12 18:30:13 CST 2016
Katharine,
Assuming the vectors are small enough that an entire vector can fit on the first process, you could do something like
VecScatterCreateToZero(vec,&scatter,&veczero);
VecScatterBegin(scatter,vec,veczero,INSERT_VALUES,SCATTER_FORWARD);
VecScatterEnd(scatter,vec,veczero,INSERT_VALUES,SCATTER_FORWARD);
if (!rank) {
  PetscViewer hdf5viewer;
  PetscViewerHDF5Open(PETSC_COMM_SELF,filename,FILE_MODE_WRITE,&hdf5viewer);
  VecView(veczero,hdf5viewer);
  PetscViewerDestroy(&hdf5viewer);
}
Note that if your Vec came from a DMDA then you need to first do a DMDAGlobalToNaturalBegin/End() to get a vector in the natural ordering to pass to VecScatterCreateToZero().
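Putting the pieces above together, a minimal sketch of the whole flow for a DMDA vector might look like the following. This is only an illustration, assuming `da` is an existing DMDA, `vec` its global vector, and the file name "x.h5" is hypothetical:

```c
/* Sketch: gather a DMDA vector to rank 0 and write it with a serial HDF5 viewer.
   Assumes `da` and `vec` already exist; "x.h5" is an illustrative file name. */
Vec         natural, veczero;
VecScatter  scatter;
PetscMPIInt rank;
PetscViewer hdf5viewer;

MPI_Comm_rank(PETSC_COMM_WORLD,&rank);

/* Reorder from the DMDA's parallel ordering to the natural ordering */
DMDACreateNaturalVector(da,&natural);
DMDAGlobalToNaturalBegin(da,vec,INSERT_VALUES,natural);
DMDAGlobalToNaturalEnd(da,vec,INSERT_VALUES,natural);

/* Gather the entire vector onto rank 0 */
VecScatterCreateToZero(natural,&scatter,&veczero);
VecScatterBegin(scatter,natural,veczero,INSERT_VALUES,SCATTER_FORWARD);
VecScatterEnd(scatter,natural,veczero,INSERT_VALUES,SCATTER_FORWARD);

/* Only rank 0 opens a serial (PETSC_COMM_SELF) viewer, so no parallel HDF5
   or file locking is involved */
if (!rank) {
  PetscViewerHDF5Open(PETSC_COMM_SELF,"x.h5",FILE_MODE_WRITE,&hdf5viewer);
  VecView(veczero,hdf5viewer);
  PetscViewerDestroy(&hdf5viewer);
}
VecScatterDestroy(&scatter);
VecDestroy(&veczero);
VecDestroy(&natural);
```

Opening the viewer on PETSC_COMM_SELF is what keeps the write purely serial; only rank 0 ever touches the file.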
On the other hand, if the vectors are enormous and cannot fit on one process it would be more involved. Essentially you would
need to copy VecView_MPI_Binary() and modify it to write out to HDF5 a part at a time instead of the binary format it writes now.
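To illustrate the staging idea (this is not PETSc's actual VecView_MPI_Binary(), just a standalone sketch of the pattern): each rank sends its local chunk to rank 0, which writes each chunk into the right slice of one serial HDF5 dataset via a hyperslab. The names `local`, `offsets`, and `counts` are hypothetical:

```c
/* Sketch of part-at-a-time staging: ranks send their chunks of doubles to
   rank 0, which writes each into a slice of a serial HDF5 dataset.
   offsets[r]/counts[r] give rank r's slice; "x.h5" is illustrative. */
#include <stdlib.h>
#include <mpi.h>
#include <hdf5.h>

void write_staged(MPI_Comm comm, double *local, hsize_t ntotal,
                  const hsize_t *offsets, const hsize_t *counts)
{
  int rank, size;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);
  if (rank) {
    /* Non-root ranks just ship their chunk to rank 0 */
    MPI_Send(local, (int)counts[rank], MPI_DOUBLE, 0, 0, comm);
  } else {
    hid_t file   = H5Fcreate("x.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t fspace = H5Screate_simple(1, &ntotal, NULL);
    hid_t dset   = H5Dcreate2(file, "/x", H5T_NATIVE_DOUBLE, fspace,
                              H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    for (int r = 0; r < size; r++) {
      double *buf = local;                 /* rank 0 writes its own chunk first */
      if (r) {                             /* then receives each other rank's chunk */
        buf = malloc(counts[r] * sizeof(double));
        MPI_Recv(buf, (int)counts[r], MPI_DOUBLE, r, 0, comm, MPI_STATUS_IGNORE);
      }
      /* Select rank r's slice of the file dataspace and write into it */
      hid_t mspace = H5Screate_simple(1, &counts[r], NULL);
      H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &offsets[r], NULL, &counts[r], NULL);
      H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace, H5P_DEFAULT, buf);
      H5Sclose(mspace);
      if (r) free(buf);
    }
    H5Dclose(dset); H5Sclose(fspace); H5Fclose(file);
  }
}
```

Because only rank 0 opens the file, this avoids any parallel-HDF5 or ADIOI locking, at the cost of serializing the I/O through one process.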
Barry
> On Jan 12, 2016, at 3:20 PM, Katharine Hyatt <kshyatt at physics.ucsb.edu> wrote:
>
> Hello,
>
> I’m trying to use PETSc’s HDF5 viewers on a system that doesn’t support parallel HDF5. When I tried naively using
>
> PetscViewer hdf5viewer;
> PetscViewerHDF5Open( PETSC_COMM_WORLD, filename, FILE_MODE_WRITE, &hdf5viewer);
>
> I get a segfault because ADIOI can’t lock. So I switched to using the binary format, which routes everything through one CPU. Then my job can output successfully. But I would like to use HDF5 without any intermediate steps, and reading the documentation it was unclear to me if it is possible to ask for behavior similar to the binary viewers from the HDF5 ones - everyone sends their information to worker 0, who then does single-process I/O. Is this possible?
>
> Thanks,
> Katharine