[petsc-users] SNES ex12 visualization

Kong, Fande fande.kong at inl.gov
Thu Sep 14 12:56:15 CDT 2017


On Thu, Sep 14, 2017 at 11:26 AM, Matthew Knepley <knepley at gmail.com> wrote:

> On Thu, Sep 14, 2017 at 1:07 PM, Kong, Fande <fande.kong at inl.gov> wrote:
>
>>
>>
>> On Thu, Sep 14, 2017 at 10:35 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>>>
>>> > On Sep 14, 2017, at 11:10 AM, Kong, Fande <fande.kong at inl.gov> wrote:
>>> >
>>> >
>>> >
>>> > On Thu, Sep 14, 2017 at 9:47 AM, Matthew Knepley <knepley at gmail.com> wrote:
>>> > On Thu, Sep 14, 2017 at 11:43 AM, Adriano Côrtes <adrimacortes at gmail.com> wrote:
>>> > Dear Matthew,
>>> >
>>> > Thank you for your reply. It worked, but this prompts another
>>> > question. Why doesn't PetscViewer write both files (.h5 and .xmf)
>>> > directly, instead of requiring the .h5 file to be post-processed (in serial)?
>>> >
>>> > 1) Maintenance: Changing the Python script is much easier than changing
>>> > the C code you would have to add to generate it.
>>> >
>>> > 2) Performance: On big parallel systems, writing files is expensive, so
>>> > I wanted to minimize what I had to do.
>>> >
>>> > 3) Robustness: Moving one file around is much easier than keeping track
>>> > of two. I just regenerate the xdmf whenever it is needed.
>>> >
>>> > And what about big 3D simulations? Does PETSc always serialize the
>>> > output of the distributed DMPlex? Is there a way to output one .h5 file
>>> > per mesh partition?
>>> >
>>> > Given the way I/O is structured on big machines, we believe the
>>> > multiple-file route is a huge mistake. Also, all our measurements say
>>> > that sending some data over the network is not noticeable compared with
>>> > the disk access costs.
>>> >
>>> > My experience here is slightly different. We tried serial output; it is
>>> > really slow for large-scale problems, and the first processor often runs
>>> > out of memory because it gathers all the data from the other processor cores.
>>>
>>>   Where in PETSc is this? What type of viewer? Is there an example that
>>> reproduces the problem? Even when we do not use MPI IO in PETSc, we try
>>> not to "put the entire object on the first process", so memory should not
>>> be an issue. For example, VecView() should scale in memory both with and
>>> without MPI IO.
>>>
>>
>> We manually gather all the data on the first processor core and write it
>> as a single VTK file.
>>
>
> Of course I am not doing that. I reduce everything to an ISView or a
> VecView call. That way it uses MPI I/O if it's turned on.
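>
> As a minimal sketch (the Vec x and the file name here are placeholders),
> that pattern is just:
>
>   PetscViewer    viewer;
>   PetscErrorCode ierr;
>
>   /* Open a parallel binary viewer; run with -viewer_binary_mpiio and
>      the write below goes through collective MPI I/O. */
>   ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "sol.dat", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
>   ierr = VecView(x, viewer);CHKERRQ(ierr);  /* no gather to rank 0 */
>   ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);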
>

I meant that Fande manually gathers all the data on the first processor core
in his in-house code.


>
>    Matt
>
>
>>
>>>
>>> > The parallel IO runs smoothly and much faster than I expected. We have
>>> > run experiments on tens of thousands of cores for a problem with 1
>>> > billion unknowns.
>>>
>>>     Is this your own canned IO or something in PETSc?
>>>
>>
>> We implemented the writer based on ISView and VecView with the HDF5
>> viewer in PETSc, outputting all data into a single HDF5 file. ISView and
>> VecView do the magic for me.
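>>
>> As a rough sketch (the object and file names are placeholders), the
>> writer boils down to:
>>
>>   PetscViewer    viewer;
>>   PetscErrorCode ierr;
>>
>>   /* Requires PETSc configured with HDF5 support. */
>>   ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "sol.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
>>   ierr = PetscObjectSetName((PetscObject)x, "solution");CHKERRQ(ierr); /* dataset name in the file */
>>   ierr = VecView(x, viewer);CHKERRQ(ierr);  /* all ranks write collectively into one HDF5 file */
>>   ierr = ISView(is, viewer);CHKERRQ(ierr);  /* index sets land in the same file */
>>   ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);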
>>
>>
>>
>>>
>>> > I did not see any concern so far.
>>>
>>>    Ten thousand files is possibly manageable but I question 2 million.
>>>
>>
>> Just one single HDF5 file.
>>
>> Fande,
>>
>>
>>>
>>> >
>>> >
>>> > Fande,
>>> >
>>> >
>>> >   Thanks,
>>> >
>>> >     Matt
>>> >
>>> > Best regards,
>>> > Adriano.
>>> >
>>> >
>>> > 2017-09-14 12:00 GMT-03:00 Matthew Knepley <knepley at gmail.com>:
>>> > On Thu, Sep 14, 2017 at 10:30 AM, Adriano Côrtes <adrimacortes at gmail.com> wrote:
>>> > Dear all,
>>> >
>>> > I am running SNES ex12 and passing the options -dm_view hdf5:sol.h5
>>> > -vec_view hdf5:sol.h5::append to generate an output file. The .h5 file
>>> > is generated, but I am not able to load it in ParaView (5.4.0, 64-bit).
>>> > ParaView recognizes the file and offers several options to read it;
>>> > here is the complete list:
>>> >
>>> > Chombo Files
>>> > GTC Files
>>> > M3DC1 Files
>>> > Multilevel 3D Plasma Files
>>> > PFLOTRAN Files
>>> > Pixie Files
>>> > Tetrad Files
>>> > UNIC Files
>>> > VizSchema Files
>>> >
>>> > The problem is that none of the options above work :(
>>> > I'm using the configure option '-download-hdf5', which installs hdf5
>>> > version 1.8.18.
>>> > Any hint on how to fix this and get the visualization working?
>>> >
>>> > Yes, ParaView does not directly read HDF5. It needs you to tell it
>>> > what the data in the HDF5 file means. You do this by creating a
>>> > *.xdmf file, which is XML. We provide a tool
>>> >
>>> >   $PETSC_DIR/bin/petsc_gen_xdmf.py <HDF5 file>
>>> >
>>> > which should automatically produce this file for you. Let us know if
>>> > it does not work.
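>>> >
>>> > For example, with the options from your message the sequence would be
>>> > (a sketch; "./ex12" stands for however you invoke the example):
>>> >
>>> >   ./ex12 -dm_view hdf5:sol.h5 -vec_view hdf5:sol.h5::append
>>> >   $PETSC_DIR/bin/petsc_gen_xdmf.py sol.h5
>>> >
>>> > and then open the generated XDMF file, not the .h5, in ParaView.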
>>> >
>>> >   Thanks,
>>> >
>>> >     Matt
>>> >
>>> >
>>> > Best regards,
>>> > Adriano.
>>> >
>>> > --
>>> > Adriano Côrtes
>>> > =================================================
>>> > Campus Duque de Caxias and
>>> > High-performance Computing Center (NACAD/COPPE)
>>> > Federal University of Rio de Janeiro (UFRJ)
>>> >
>>> >
>>> >
>>> > --
>>> > What most experimenters take for granted before they begin their
>>> > experiments is infinitely more interesting than any results to which
>>> > their experiments lead.
>>> > -- Norbert Wiener
>>> >
>>> > http://www.caam.rice.edu/~mk51/
>>> >
>>>
>>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> http://www.caam.rice.edu/~mk51/
>