[MOAB-dev] 137GB mesh file

Grindeanu, Iulian R. iulian at mcs.anl.gov
Thu Jul 31 08:30:12 CDT 2014


Hi Rajeev,
You can use the mbsize -p0 option.
This will load the file in parallel, but the printout will be interleaved between processors, since each processor writes its own counts. (-p1 will also resolve shared entities; -p2 will additionally exchange a layer of ghost elements.)

So you can do something like this:
mpiexec -np 64 mbsize -p0  <file_name>

How many partitions do you have?

Do you have partitions at all (parallel partitions)?

You can also modify the HelloParMOAB example for your needs (harder).

The LoadPartial example can load only a few partitions and write a smaller file; it can be run in serial.

Something like:
./LoadPartial <file_name> PARALLEL_PARTITION 0 1 5

This will load only partitions 0, 1, and 5, in serial, and write a part.h5m file, on which you can then run mbsize in serial (if it is small enough).

All these methods will at least confirm that you have a "readable" file.
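As an even cheaper first sanity check (this is a generic HDF5 check, not a MOAB tool), you can verify that the file begins with the HDF5 superblock signature, which any valid .h5m file written by MOAB will have at offset 0. A minimal sketch:

```python
# Quick sanity check: .h5m files are HDF5 files, and an HDF5 file begins
# with the 8-byte superblock signature \x89HDF\r\n\x1a\n. Reading only
# 8 bytes is cheap even for a 137GB file. (HDF5 technically also allows
# the signature at 512-byte-multiple offsets, but MOAB writes it at 0.)

HDF5_SIGNATURE = b"\x89HDF\r\n\x1a\n"

def looks_like_hdf5(path):
    """Return True if the file starts with the HDF5 signature."""
    with open(path, "rb") as f:
        return f.read(8) == HDF5_SIGNATURE
```

This only confirms the container format; you still need mbsize or LoadPartial to check the actual mesh contents.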

I think we need to modify mbsize to run better in parallel (for example, to gather the counts of owned entities from each processor) and print only once, something like HelloParMOAB does.
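The gather-and-print-once idea can be sketched like this (plain Python with hypothetical per-rank counts; a real implementation would count owned entities via moab::ParallelComm and sum them onto rank 0 with an MPI reduce, as HelloParMOAB does):

```python
# Sketch of "gather counts from each processor, print only once".
# The per-rank counts below are made up for illustration.

def gather_counts(per_rank_counts):
    """Sum per-entity-type counts of owned entities across all ranks."""
    totals = {}
    for rank_counts in per_rank_counts:
        for entity_type, count in rank_counts.items():
            totals[entity_type] = totals.get(entity_type, 0) + count
    return totals

# Hypothetical owned-entity counts reported by 3 ranks:
ranks = [
    {"Vertex": 1200, "Hex": 1000},
    {"Vertex": 1150, "Hex": 980},
    {"Vertex": 1300, "Hex": 1020},
]

totals = gather_counts(ranks)
# Only "rank 0" prints, once, after the reduction:
for entity_type, total in sorted(totals.items()):
    print(f"{entity_type}: {total}")
```

Counting only owned entities before the reduction matters: shared interface entities would otherwise be double-counted across ranks.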

Iulian

________________________________
From: moab-dev-bounces at mcs.anl.gov [moab-dev-bounces at mcs.anl.gov] on behalf of Rajeev Jain [jain at mcs.anl.gov]
Sent: Wednesday, July 30, 2014 11:40 PM
To: MOAB Dev
Subject: [MOAB-dev] 137GB mesh file

I have an ABTR model (billions of hex elements, created by MeshKit/rgg) that is 137GB on fusion at ANL:
/homes/jain/sigma/moab/lasso/meshkit/rgg/test/c_abtr_sh

I'm unable to run mbsize on it -- any suggestions?
I just want to make sure this is a valid file, and I also want to know the material/boundary sets contained in it.

Rajeev


