And of course I am wrong.
There is this ParallelComm object/class that can be used to identify which proc we are on, in ReadHDF5.cpp.

> Hello Rob,
> I think that change has to happen in src/parallel/ReadParallel.cpp.
> I am not sure yet, though; Tim would have to confirm that.
>
> Iulian
>
>> Tim knows all this, but for the rest of the list, here's the short story:
>>
>> MOAB's HDF5 reader and writer have a problem on BlueGene: they collectively
>> read in initial conditions or write output, and run out of memory. This
>> out-of-memory condition arises even though MOAB is doing all the right
>> things -- using HDF5, using collective I/O -- because the MPI-IO library on
>> Intrepid consumes too much memory.
>>
>> I've got one approach to deal with the MPI-IO memory issue for writes. That
>> approach would sort of work for the reads too, but what is really needed is
>> for rank 0 to do the read and broadcast the result to everyone.
>>
>> So I'm looking for a little help understanding MOAB's read side of the
>> code. Conceptually, all processes read the table of entities.
>>
>> A fairly small 'mbconvert' job will run out of memory:
>>
>> 512 nodes, 2048 processors:
>>
>> ======
>> NODES=512
>> CORES=$(($NODES * 4))
>> cd /intrepid-fs0/users/robl/scratch/moab-test
>>
>> cqsub -t 15 -m vn -p SSSPP -e MPIRUN_LABEL=1:BG_COREDUMPONEXIT=1 \
>>  -n $NODES -c $CORES /home/robl/src/moab-svn/build/tools/mbconvert \
>>  -O CPUTIME -O PARALLEL_GHOSTS=3.0.1 -O PARALLEL=READ_PART \
>>  -O PARALLEL_RESOLVE_SHARED_ENTS -O PARTITION -t \
>>  -o CPUTIME -o PARALLEL=WRITE_PART /intrepid-fs0/users/tautges/persistent/meshes/2bricks/nogeom/64bricks_8mtet_ng_rib_${CORES}.h5m \
>>  /intrepid-fs0/users/robl/scratch/moab/8mtet_ng-${CORES}-out.h5m
>> ======
>>
>> I'm kind of stumbling around ReadHDF5::load_file and
>> ReadHDF5::load_file_partial, trying to find the spot where a collection of
>> tags is read into memory. Instead of having all processors do the read, I'd
>> like to have just one processor read and then send the tag data to the
>> other processors.
>>
>> First, do I remember the basic MOAB concept correctly: that early on, every
>> process reads the exact same tables out of the (in this case HDF5) file?
>>
>> If I want rank 0 to do all the work and send data to the other ranks,
>> where's the best place to slip that in? It's been a while since I did
>> anything non-trivial in C++, so some of these data structures are kind of
>> Greek to me.
>>
>> thanks
>> ==rob
>>
>> --
>> Rob Latham
>> Mathematics and Computer Science Division
>> Argonne National Lab, IL USA
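
For readers following the thread, here is a minimal sketch of the "rank 0
reads, then broadcasts" pattern Rob describes, written with plain MPI calls.
The names read_table_from_file and read_and_broadcast are placeholders, not
MOAB functions, and the double-valued table is just an example payload; inside
ReadHDF5 the rank would presumably come from the ParallelComm object mentioned
at the top of the thread rather than from a direct MPI_Comm_rank call.

======
#include <mpi.h>
#include <vector>

// Placeholder: fill 'buf' with a table's contents on the calling process.
// In MOAB this would be whatever HDF5 call currently reads the table.
void read_table_from_file(std::vector<double>& buf);

void read_and_broadcast(MPI_Comm comm, std::vector<double>& table)
{
  int rank;
  MPI_Comm_rank(comm, &rank);

  long count = 0;
  if (rank == 0) {
    // Only rank 0 touches the file, so the MPI-IO layer never sees a
    // collective read from thousands of processes.
    read_table_from_file(table);
    count = static_cast<long>(table.size());
  }

  // Tell everyone how big the table is, then ship its contents.
  MPI_Bcast(&count, 1, MPI_LONG, 0, comm);
  if (rank != 0)
    table.resize(count);
  MPI_Bcast(table.data(), static_cast<int>(count), MPI_DOUBLE, 0, comm);
}
======

Whether something like this belongs inside ReadHDF5::load_file itself or one
level up in src/parallel/ReadParallel.cpp is exactly the open question in the
thread.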