> On Fri, Oct 19, 2012 at 01:30:46PM -0700, Mark Miller wrote:
> > Not sure how much this helps, but the newest versions of the HDF5
> > library support reading a file into memory (one I/O operation); then
> > proc 0 can broadcast that buffer (a single broadcast) and the other
> > procs can 'open' that buffer of bytes as an HDF5 file. So, in theory,
> > with minimal changes to MOAB, it's possible to 'spoof' MOAB into
> > thinking each processor did the read anyway. One problem: I think
> > this feature works for whole files only. So, if the tables MOAB needs
> > to read this way are self-contained in a single file, it could work.
> > Otherwise, it's not much help...
> >
> > This is the 'file image' feature of HDF5.
>
> I'll take a look at that approach, but on BlueGene pulling in an
> entire file may not be a viable option. These processors only need
> one piece of a larger file. In virtual node mode I only have 512
> MiB in total to work with.
>
> ==rob

My assumption is that the "file image" feature can be used for a portion
of the file; obviously there are files that do not fit on one proc (or
in 512 MiB). So Mark is probably suggesting that the "header/tags/set"
part of the HDF5 read happen on one proc, and that the rest of the
processors "think" they read it directly from the file, while in fact
they are reading it from the "buffer" (file image)? Am I wrong in my
understanding? (A minimal sketch of that pattern is at the end of this
message.)

Right now, the HDF5 reader in MOAB has to read the header plus something
more on each processor, like some set information. I am not sure what
exactly is read by each processor, at a minimum. I will look into the
code and try to figure it out.

Iulian

> > Mark
> >
> > On Fri, 2012-10-19 at 15:16 -0500, Iulian Grindeanu wrote:
> > > Hello Rob,
> > > I think that change has to happen in src/parallel/ReadParallel.cpp.
> > > I am not sure yet, though; Tim would confirm that.
> > >
> > > Iulian
> > >
> > > ______________________________________________________________________
> > > Tim knows all this, but for the rest of the list, here's the
> > > short story:
> > >
> > > MOAB's HDF5 reader and writer have a problem on BlueGene where they
> > > will collectively read in initial conditions or write output, and
> > > run out of memory. This out-of-memory condition comes from MOAB
> > > doing all the right things -- using HDF5, using collective I/O --
> > > but the MPI-IO library on Intrepid goes and consumes too much
> > > memory.
> > >
> > > I've got one approach to deal with the MPI-IO memory issue for
> > > writes. This approach would sort of work for the reads, but what is
> > > really needed is for rank 0 to do the read and broadcast the result
> > > to everyone.
> > >
> > > So, I'm looking for a little help understanding MOAB's read side of
> > > the code. Conceptually, all processes read the table of entities.
> > >
> > > A fairly small 'mbconvert' job will run out of memory:
> > >
> > > 512 nodes, 2048 processors:
> > >
> > > ======
> > > NODES=512
> > > CORES=$(($NODES * 4))
> > > cd /intrepid-fs0/users/robl/scratch/moab-test
> > >
> > > cqsub -t 15 -m vn -p SSSPP -e MPIRUN_LABEL=1:BG_COREDUMPONEXIT=1 \
> > >   -n $NODES -c $CORES /home/robl/src/moab-svn/build/tools/mbconvert \
> > >   -O CPUTIME -O PARALLEL_GHOSTS=3.0.1 -O PARALLEL=READ_PART \
> > >   -O PARALLEL_RESOLVE_SHARED_ENTS -O PARTITION -t \
> > >   -o CPUTIME -o PARALLEL=WRITE_PART \
> > >   /intrepid-fs0/users/tautges/persistent/meshes/2bricks/nogeom/64bricks_8mtet_ng_rib_${CORES}.h5m \
> > >   /intrepid-fs0/users/robl/scratch/moab/8mtet_ng-${CORES}-out.h5m
> > > ======
> > >
> > > I'm kind of stumbling around ReadHDF5::load_file and
> > > ReadHDF5::load_file_partial trying to find a spot where a collection
> > > of tags is read into memory. I'd like to, instead of having all
> > > processors do the read, have just one processor read and then send
> > > the tag data to the other processors.
> > >
> > > First, do I remember the basic MOAB concept correctly: that early on
> > > every process reads the exact same tables out of the (in this case
> > > HDF5) file?
> > >
> > > If I want rank 0 to do all the work and send data to other ranks,
> > > where's the best place to slip that in? It's been a while since I
> > > did anything non-trivial in C++, so some of these data structures
> > > are kind of Greek to me.
> > >
> > > thanks
> > > ==rob
> > >
> > > --
> > > Rob Latham
> > > Mathematics and Computer Science Division
> > > Argonne National Lab, IL USA
>
> --
> Rob Latham
> Mathematics and Computer Science Division
> Argonne National Lab, IL USA
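For reference, here is a minimal sketch of the pattern I think Mark is
describing: rank 0 reads the whole file in one shot, broadcasts the
bytes, and every rank opens the buffer with HDF5's file-image call. This
assumes HDF5 >= 1.8.9 with the high-level (H5LT) interface; the file
name is made up, error checking is left out, and it is plain C outside
of MOAB, not a proposed patch:

======
/* Sketch only: one read on rank 0, one broadcast, then every rank
 * "opens" the in-memory bytes as an HDF5 file (file image feature). */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <hdf5.h>
#include <hdf5_hl.h>

int main(int argc, char **argv)
{
    int rank;
    long nbytes = 0;
    char *image = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                          /* one I/O operation */
        FILE *fp = fopen("mesh.h5m", "rb");   /* hypothetical file name */
        fseek(fp, 0L, SEEK_END);
        nbytes = ftell(fp);
        rewind(fp);
        image = malloc(nbytes);
        fread(image, 1, (size_t)nbytes, fp);
        fclose(fp);
    }

    /* one broadcast of the size, one of the bytes
     * (a real version would chunk this for files over 2 GB) */
    MPI_Bcast(&nbytes, 1, MPI_LONG, 0, MPI_COMM_WORLD);
    if (rank != 0)
        image = malloc(nbytes);
    MPI_Bcast(image, (int)nbytes, MPI_BYTE, 0, MPI_COMM_WORLD);

    /* flags = 0: HDF5 copies the buffer and opens the image read-only,
     * so each rank behaves as if it had read the file itself */
    hid_t file_id = H5LTopen_file_image(image, (size_t)nbytes, 0);

    /* ... use file_id exactly like a file opened with H5Fopen ... */

    H5Fclose(file_id);
    free(image);
    MPI_Finalize();
    return 0;
}
======

Of course this is exactly the whole-file case Mark warned about: every
rank still ends up holding the complete image (and HDF5 keeps its own
copy on top of that with flags = 0), so in 512 MiB per node it only
helps when the file is small. Whether the same trick can be applied to
just the header/tag/set tables is the part I still need to figure out.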