[petsc-dev] problem with hdf5 plots

Kong, Fande fande.kong at inl.gov
Mon Oct 24 13:03:47 CDT 2016


Hi Mark,

If you use ParaView to visualize the xdmf file generated by
petsc_gen_xdmf.py, you need to uncheck the last time step in ParaView,
because petsc_gen_xdmf.py creates one more time step than ParaView needs.
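
A quick way to check this directly is to count the time values actually
stored in the .h5 file, for example with h5py. This is only a sketch: it
assumes the sequence values written by DMSequenceView_HDF5 land in a
root-level dataset named "time", which may not match your file's layout.

import h5py

with h5py.File("ex3.h5", "r") as f:
    # Assumption: the sequence values live in a root-level "time" dataset;
    # adjust the path if your file is laid out differently.
    if "time" in f:
        t = f["time"][...]
        print("stored time values:", t.ravel())
        print("number of stored steps:", t.size)
    else:
        print("no 'time' dataset found; file contains:", list(f.keys()))

If the stored count is one larger than the number of time steps ParaView
expects, that matches the off-by-one described above.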

Fande,


On Mon, Oct 24, 2016 at 11:35 AM, Mark Adams <mfadams at lbl.gov> wrote:

> FYI, by commenting out these lines in VecView_Plex_Local_HDF5, I can now
> see my data:
>
>   /* ierr = DMGetOutputSequenceNumber(dm, &seqnum, &seqval);CHKERRQ(ierr); */
>   /* ierr = PetscViewerHDF5SetTimestep(viewer, seqnum);CHKERRQ(ierr); */
>   /* ierr = DMSequenceView_HDF5(dm, "time", seqnum, (PetscScalar) seqval, viewer);CHKERRQ(ierr); */
>
> I'm not sure what the problem is here; something in DMSequenceView_HDF5
> and/or petsc_gen_xdmf.py. I tried this with ParaView and got an error
> message as well, so it looks like it is not a VisIt problem.
>
>
>
> On Sat, Oct 22, 2016 at 1:51 PM, Mark Adams <mfadams at lbl.gov> wrote:
>
>> I have tried a million things, but I have a strange error that I cannot
>> fix and wanted to see if anyone has any ideas.
>>
>> I am printing an 8-field, cell-centered 3D field (960 variables total). I
>> initialize the field and print it, and it looks perfect. I then do a TSSolve,
>> and the output looks fine except that all the data is 0.0.
>>
>> But if I do a VecView to standard output just before the HDF5 print of the
>> vector, it has good-looking data. I also added a print statement in pdvec.c
>> before the call to HDF5, like this:
>>
>> int i;
>> PetscPrintf(PETSC_COMM_WORLD,"%s call PetscStackCallHDF5 xin->map->n=%6D: ",__FUNCT__,xin->map->n);
>> for (i=0; i<xin->map->n && i<3; i++) {
>>   PetscPrintf(PETSC_COMM_WORLD,"%10.2e ",x[i]);
>> }
>> PetscPrintf(PETSC_COMM_WORLD,"\n");
>> PetscStackCallHDF5(H5Dwrite,(dset_id, memscalartype, memspace, filespace, plist_id, x));
>>
>> This (appended) is what I get on one processor under valgrind. I see
>> perfectly good data, plus the time-step data, in the second call. There is a
>> valgrind message in the first (good) call.
>>
>> I wonder if ${PETSC_DIR}/bin/petsc_gen_xdmf.py ex3.h5 is getting
>> confused by the time data. Does anyone know of a way to look at the raw .h5
>> data?
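
For looking at the raw .h5 contents, the h5dump and h5ls command-line tools
that ship with HDF5 will list the datasets and print their values; the same
can be done from Python with h5py. A minimal sketch (the file name ex3.h5 is
taken from the command above; the dataset paths are simply whatever the file
contains):

import h5py

def show(name, obj):
    # Print each dataset's path, shape, dtype, and first few values.
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape, obj.dtype, obj[...].ravel()[:3])

with h5py.File("ex3.h5", "r") as f:
    f.visititems(show)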
>>
>> Mark
>>
>> VecView_MPI_HDF5 call PetscStackCallHDF5 xin->map->n=   525:   5.01e+02   0.00e+00  -3.85e-16
>> ==94708== Use of uninitialised value of size 8
>> ==94708==    at 0x1019405BB: H5D__chunk_lookup (in ./ex3.arch-macosx-gnu-g)
>> ==94708==    by 0x10195757A: H5D__chunk_collective_io (in ./ex3.arch-macosx-gnu-g)
>> ==94708==    by 0x101959BAF: H5D__chunk_collective_write (in ./ex3.arch-macosx-gnu-g)
>> ==94708==    by 0x101954E4A: H5Dwrite (in ./ex3.arch-macosx-gnu-g)
>> ==94708==    by 0x100304C6B: VecView_MPI_HDF5 (pdvec.c:762)
>> ==94708==    by 0x1002DEE52: VecView_Seq (bvec2.c:654)
>> ==94708==    by 0x100317B55: VecView (vector.c:616)
>> ==94708==    by 0x100DACCE6: DMPlexWriteCoordinates_HDF5_Static (plexhdf5.c:422)
>> ==94708==    by 0x100DAA23A: DMPlexView_HDF5 (plexhdf5.c:519)
>> ==94708==    by 0x100C60109: DMView_Plex (plex.c:829)
>> ==94708==    by 0x1009DC80E: DMView (dm.c:851)
>> ==94708==    by 0x100B505D1: DMView_HDF5_p8est (pforest.c:1417)
>> ==94708==    by 0x100B3F8E8: DMView_p8est (pforest.c:1440)
>> ==94708==    by 0x1009DC80E: DMView (dm.c:851)
>> ==94708==    by 0x10006CCBD: PetscObjectView (destroy.c:106)
>> ==94708==    by 0x100093282: PetscObjectViewFromOptions (options.c:2810)
>> ==94708==    by 0x1000277BF: DMViewFromOptions (in ./ex3.arch-macosx-gnu-g)
>> ==94708==    by 0x10001F7E1: viewDMVec (in ./ex3.arch-macosx-gnu-g)
>> ==94708==    by 0x10001799F: main (in ./ex3.arch-macosx-gnu-g)
>> ==94708==
>> VecView_MPI_HDF5 call PetscStackCallHDF5 xin->map->n=  1664:   1.00e+00   2.97e-03  -1.90e-03
>> VecView_MPI_HDF5 call PetscStackCallHDF5 xin->map->n=   960:   1.00e+00   2.97e-03  -1.90e-03
>> 0 TS dt 1.79271 time 0.
>> 1 TS dt 1.79271 time 1.79271
>> 2 TS dt 1.79271 time 3.58542
>> TS Object: 1 MPI processes
>>   type: ssp
>>   maximum steps=2
>>   maximum time=1e+12
>>   total number of nonlinear solver iterations=0
>>   total number of nonlinear solve failures=0
>>   total number of linear solver iterations=0
>>   total number of rejected steps=0
>>   using relative error tolerance of 0.0001,   using absolute error tolerance of 0.0001
>>     Scheme: rks2
>> CONVERGED_ITS at time 3.58542 after 2 steps
>> VecView_MPI_HDF5 call PetscStackCallHDF5 xin->map->n=   525:   5.01e+02   0.00e+00  -3.85e-16
>> VecView_MPI_HDF5 call PetscStackCallHDF5 xin->map->n=     1:   3.59e+00
>> VecView_MPI_HDF5 call PetscStackCallHDF5 xin->map->n=     1:   3.59e+00
>> VecView_MPI_HDF5 call PetscStackCallHDF5 xin->map->n=  1664:   1.02e+00   4.52e-04  -3.37e-03
>> VecView_MPI_HDF5 call PetscStackCallHDF5 xin->map->n=   960:   1.02e+00   4.52e-04  -3.37e-03
>> [0] done - cleanup
>>
>>
>