[petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

Danyang Su danyang.su at gmail.com
Fri Jun 12 01:15:12 CDT 2020


Hi Jed,

Thanks for double-checking.

HDF5 1.10.6 also works, but versions from 1.12.x onward do not.

Attached is the code section where I have the problem.

    !c write the dataset collectively
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    !!!! CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO WRITE !!!
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize,   &
                    hdf5_ierr, file_space_id=filespace,                &
                    mem_space_id=memspace, xfer_prp = xlist_id)

Please let me know if there is something wrong in the code that causes the problem.
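For reference, one workaround sometimes suggested for this situation is to have ranks with nothing to write make an empty selection on both dataspaces, so that every rank still participates in the collective h5dwrite_f call. A minimal sketch, assuming a local element count local_count and the variable names from the snippet above (local_count is hypothetical; the rest follow the attached code):

    ! Ranks with no local data select nothing in both the file and
    ! memory dataspaces, but still make the collective write call.
    if (local_count == 0) then
      call h5sselect_none_f(filespace, hdf5_ierr)
      call h5sselect_none_f(memspace, hdf5_ierr)
    end if
    call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize, &
                    hdf5_ierr, file_space_id=filespace,              &
                    mem_space_id=memspace, xfer_prp=xlist_id)

Whether this is required (or sufficient) for 1.12.x depends on the exact collective semantics the library enforces, so it is only a sketch of the usual pattern, not a confirmed fix.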

Thanks,

Danyang

On 2020-06-11, 8:32 PM, "Jed Brown" <jed at jedbrown.org> wrote:

    Danyang Su <danyang.su at gmail.com> writes:

    > Hi Barry,
    >
    > The HDF5 calls fail. I reconfigured PETSc with HDF5 1.10.5 and it works fine on different platforms, so it seems likely there is a bug in the latest HDF5 version.

    I would double-check that you have not subtly violated a collective requirement in the interface, then report to upstream.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: example.F90
Type: application/octet-stream
Size: 5321 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20200611/aa1d374f/attachment.obj>


More information about the petsc-users mailing list