[petsc-dev] DMplex reader / viewers
Jed Brown
jed at jedbrown.org
Tue Jan 21 23:23:55 CST 2014
"Gorman, Gerard J" <g.gorman at imperial.ac.uk> writes:
>>> a. scatter back to a single I/O node and use sequential I/O using
>>> the ordering of the original (exodus) mesh. This allows reading and
>>> writing on an arbitrary number of processors, but has potential
>>> memory footprint and performance issues. How large a mesh can we
>>> reasonably expect to be able to handle this way?
>
> Personally I would stay far away from this option. Other than being a
> terrible serial bottleneck, it’s a major headache when you want to
> run something just a little bit bigger than what happens to fit
> within a single node…
The VTU viewer does it in a memory-scalable way. The only reason to do
this is to interact with a file format to which we cannot do parallel
writes. For VTU (XML with binary-appended), we could use MPI-IO for the
binary appended data. The Exodus library (formerly separate in Nemesis)
does pre-decomposed meshes, which is a crazy workflow. We want mesh
files to be independent of the number of processors; we'll read in
parallel and partition on the fly. I believe we can do this by using
the NetCDF-4 interface to access the Exodus file directly.
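As a sketch of what that direct access could look like (this is not
existing DMPlex code), each rank opens the file with the NetCDF-4
parallel API and reads its slab of the first element block's
connectivity. It assumes a NetCDF build with parallel HDF5 support;
"connect1" is the Exodus naming convention for block 1, and error
checking is omitted for brevity.

  #include <mpi.h>
  #include <netcdf.h>
  #include <netcdf_par.h>
  #include <stdlib.h>

  /* Each rank reads a contiguous slab of the first element block's
     connectivity ("connect1" in the Exodus/NetCDF naming convention). */
  int read_connectivity(MPI_Comm comm, const char *filename)
  {
    int     ncid, varid, rank, size;
    int     dimids[2];
    size_t  nelem, npere, start[2], count[2];
    int    *conn;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    nc_open_par(filename, NC_NOWRITE, comm, MPI_INFO_NULL, &ncid);
    nc_inq_varid(ncid, "connect1", &varid);
    nc_inq_vardimid(ncid, varid, dimids);
    nc_inq_dimlen(ncid, dimids[0], &nelem);  /* number of elements */
    nc_inq_dimlen(ncid, dimids[1], &npere);  /* nodes per element  */

    start[0] = rank * (nelem / size);
    count[0] = (rank == size - 1) ? nelem - start[0] : nelem / size;
    start[1] = 0;
    count[1] = npere;

    conn = malloc(count[0] * npere * sizeof(*conn));
    nc_var_par_access(ncid, varid, NC_COLLECTIVE);  /* collective MPI-IO read */
    nc_get_vara_int(ncid, varid, start, count, conn);

    /* ... hand the slab to the partitioner ... */
    free(conn);
    return nc_close(ncid);
  }

From there each slab would go straight to the partitioner, which is
the "read in parallel and partition on the fly" step.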
>>> b. Do “poor man” parallel I/O where each CPU does its own I/O, and
>>> possibly create interface matching files à la Nemesis or
>>> SILO. Maybe we can save enough information on the parallel layout
>>> in order to easily write an un-partitioner as a post-processor.
>
> I am pretty sure that if we are writing everything in slabs to an HDF5
> container we do not have to worry too much about the parallel layout,
> although some clear optimisations are possible. In the worst case it
> is a three-stage process where we perform a parallel read of the
> connectivity, a scatter/gather for continuous numbering, parallel
> repartitioning and a subsequent parallel read of the remaining
> data. Importantly, it is at least scalable.
Yeah, though vis doesn't like collective interfaces, let alone
partitioning on the fly, so we'll either need our own reader plugin or
have to play some games.
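To make the intended read-then-distribute workflow concrete, here is a
minimal sketch in PETSc itself: open the Exodus file, then let
DMPlexDistribute partition on the fly. It assumes the current
DMPlexCreateExodusFromFile and four-argument DMPlexDistribute
signatures (these have changed across releases), and the filename is
illustrative. The Exodus reader still pulls the file in serially; the
NetCDF-4 route sketched above is what would replace that step.

  #include <petscdmplex.h>

  int main(int argc, char **argv)
  {
    DM             dm, dmDist;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    /* Read the mesh (currently serial inside the Exodus reader). */
    ierr = DMPlexCreateExodusFromFile(PETSC_COMM_WORLD, "mesh.exo", PETSC_TRUE, &dm);CHKERRQ(ierr);
    /* Partition on the fly; no pre-decomposed files needed. */
    ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);
    if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}
    /* ... set up and solve on the distributed mesh ... */
    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }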
> What magic sauce is used by high-order FEM codes such as
> Nek5000 that can run on ~1M cores?
Nek runs about one element per core, so its challenges are different.
> Are there any other formats that we should be considering? It’s a few
> years since I tried playing about with CGNS - at the time its parallel
> I/O was non-existent and I have not seen it being pushed since.
We have (crude) CGNS support in PETSc. It's an HDF5 format. I think
it's comparable to Exodus, but Cubit doesn't write it natively.
(Meanwhile, lots of other meshing tools don't write Exodus.)
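For completeness, the CGNS path enters DMPlex through a sibling of the
Exodus call. A minimal sketch, assuming a PETSc build configured
--with-cgns and that DMPlexCreateCGNSFromFile mirrors the Exodus
reader's shape; the filename is illustrative.

  /* Same read-then-distribute pattern as above, but from a CGNS file.
     Requires PETSc configured --with-cgns. */
  ierr = DMPlexCreateCGNSFromFile(PETSC_COMM_WORLD, "mesh.cgns", PETSC_TRUE, &dm);CHKERRQ(ierr);
  ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);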
> XDMF looks interesting as it is essentially some XML metadata and an
> HDF5 bucket.
It feels like VTK to me, except that you can put the arrays in an HDF5
file instead of appending them to the XML file.
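In that spirit, the heavy data could come straight out of the existing
HDF5 viewer, with the XDMF (or VTK) XML file merely pointing at the
datasets. A minimal sketch; the group, object, and file names here
("/fields", "u", "solution.h5") are illustrative, not an established
convention.

  #include <petscviewerhdf5.h>

  /* Dump the heavy data into an HDF5 "bucket"; a small XDMF XML file
     would then reference /fields/u inside solution.h5 instead of
     carrying the array inline. */
  PetscErrorCode write_bucket(Vec u)
  {
    PetscViewer    viewer;
    PetscErrorCode ierr;

    ierr = PetscObjectSetName((PetscObject)u, "u");CHKERRQ(ierr);
    ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "solution.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
    ierr = PetscViewerHDF5PushGroup(viewer, "/fields");CHKERRQ(ierr);
    ierr = VecView(u, viewer);CHKERRQ(ierr);  /* collective HDF5 write */
    ierr = PetscViewerHDF5PopGroup(viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    return 0;
  }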