[petsc-users] How to read/write unstructured mesh in parallel
Tabrez Ali
stali at geology.wisc.edu
Mon Nov 19 19:15:12 CST 2012
I have used serial Fortran I/O, where all processes open the same input
file and each reads its own part of the mesh. For runs on 1K-4K cores it
is fairly fast; for example, it takes under 2 minutes to read an
unstructured mesh of 16 million elements (larger meshes are also fine)
on 1000 cores. It is probably not kind to the file system (Lustre), but
you only do it once.
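A minimal serial sketch of that pattern, with plain Python file I/O standing in for Fortran direct-access reads, and a hypothetical fixed-size 4-byte record per element (the record layout, and the names `read_my_chunk`/`RECORD`, are illustrative assumptions, not PETSc API):

```python
import os
import struct

RECORD = 4  # hypothetical fixed-size record: one little-endian int32 per element

def read_my_chunk(path, rank, nprocs):
    # Every process opens the same shared file and reads only its own
    # contiguous block of records -- the pattern described above.
    nrecords = os.path.getsize(path) // RECORD
    lo = rank * nrecords // nprocs            # block decomposition
    hi = (rank + 1) * nrecords // nprocs
    with open(path, "rb") as f:
        f.seek(lo * RECORD)                   # jump to this rank's byte range
        data = f.read((hi - lo) * RECORD)
    return [struct.unpack("<i", data[i:i + RECORD])[0]
            for i in range(0, len(data), RECORD)]
```

Each rank issues one seek and one contiguous read, so the only file-system
pressure is many simultaneous opens of the same file.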
MPI collective I/O is supposed to be faster, as small requests get
merged into a single large one. However, it is binary only, so you may
be better off using higher-level libraries (HDF5/Parallel netCDF). Even
that seems to get too slow beyond a certain number of processors; see,
e.g., Sec. 4.2 in the following paper:
http://climate.ornl.gov/~rmills/pubs/mills-PFLOTRAN_CUG2009.pdf
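The merging that collective I/O performs (the aggregation step of two-phase I/O) can be illustrated serially: instead of issuing each small request to the file system, do one large contiguous read covering all of them and carve out the pieces. This is a sketch of the idea only, not the ROMIO implementation; `two_phase_read` is a hypothetical name:

```python
def two_phase_read(f, requests):
    """Merge many small (offset, length) requests into one contiguous
    read, then slice each requested piece out of the single buffer.
    `f` is any seekable binary file object."""
    lo = min(off for off, _ in requests)
    hi = max(off + n for off, n in requests)
    f.seek(lo)
    buf = f.read(hi - lo)  # one large request hits the file system
    return [buf[off - lo: off - lo + n] for off, n in requests]
```

The file system sees a single large sequential read, which is what makes
collective I/O cheaper than many scattered small reads.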
T
On 11/19/2012 06:08 PM, Fande Kong wrote:
> It seems to be a good idea. I have another related question: if all
> processors read the same file at the same time, would conflicting
> access make it very slow?
>
> On Mon, Nov 19, 2012 at 4:52 PM, Jed Brown <jedbrown at mcs.anl.gov
> <mailto:jedbrown at mcs.anl.gov>> wrote:
>
> On Tue, Nov 20, 2012 at 12:38 AM, Fande Kong <fd.kong at siat.ac.cn
> <mailto:fd.kong at siat.ac.cn>> wrote:
>
> I want to try a very 'large' mesh. I guess it would be a bottleneck.
>
>
> It's really worth profiling. Having every process read directly is
> no more scalable (frequently less, actually) than having rank 1
> read incrementally and send to whoever should own it. You can do
> full parallel I/O, but the code depends on the mesh format.
>
>
>
>
> --
> Fande Kong
> ShenZhen Institutes of Advanced Technology
> Chinese Academy of Sciences
>
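The alternative Jed describes above (one rank streams the file and forwards each piece to its owner) can be sketched serially, with a hypothetical `send(rank, data)` callback standing in for MPI point-to-point communication such as MPI_Send; the function name and record layout are illustrative assumptions:

```python
import os

def scatter_from_rank0(path, nprocs, record_size, send):
    """Rank 0 reads the file incrementally, one owner's block at a
    time, and forwards each block via `send(rank, data)` -- a stand-in
    for MPI point-to-point communication. Only one process ever
    touches the file."""
    nrecords = os.path.getsize(path) // record_size
    with open(path, "rb") as f:
        for r in range(nprocs):
            lo = r * nrecords // nprocs       # block decomposition
            hi = (r + 1) * nrecords // nprocs
            send(r, f.read((hi - lo) * record_size))
```

Because the file is read sequentially by a single process, this avoids
contention on the file system at the cost of serializing the read.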