[MOAB-dev] 137GB mesh file
Grindeanu, Iulian R.
iulian at mcs.anl.gov
Thu Jul 31 21:34:48 CDT 2014
How did you generate it, with rgg?
It will be tough to read it in parallel without a parallel partition.
Do you have material sets on it?
You could maybe use the MATERIAL_SET tag to load it partially; look at the LoadPartial example (you need to know the material values).
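For example, following the LoadPartial usage shown further down in this
thread, something like this (the values 1 and 2 here are just placeholders
for whatever material values were actually assigned):

./LoadPartial <file_name> MATERIAL_SET 1 2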
________________________________________
From: Rajeev Jain [jain at mcs.anl.gov]
Sent: Thursday, July 31, 2014 12:14 PM
To: Grindeanu, Iulian R.; Vijay S. Mahadevan
Cc: MOAB Dev
Subject: Re: [MOAB-dev] 137GB mesh file
I don't have parallel partitions; I'm looking into the HelloParMOAB example.
Thanks.
Rajeev
----- Original Message -----
From: "Grindeanu, Iulian R." <iulian at mcs.anl.gov>
To: Vijay S. Mahadevan <vijay.m at gmail.com>
Cc: "Jain, Rajeev" <jain at mcs.anl.gov>; MOAB Dev <moab-dev at mcs.anl.gov>
Sent: Thursday, July 31, 2014 10:12 AM
Subject: RE: [MOAB-dev] 137GB mesh file
mbconvert is parallel-capable.
mbpart is not.
mbsize "is", but its output needs improvements.
I agree we need to review all the tools.
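For example, a parallel round trip through mbconvert could look like the
line below (a sketch, not tested on this file; the -O read options and the
-o write option assume the file carries a PARALLEL_PARTITION tag):

mpiexec -np 64 mbconvert -O PARALLEL=READ_PART -O PARTITION=PARALLEL_PARTITION -O PARALLEL_RESOLVE_SHARED_ENTS -o PARALLEL=WRITE_PART <file_name> out.h5m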
________________________________________
From: Vijay S. Mahadevan [vijay.m at gmail.com]
Sent: Thursday, July 31, 2014 9:36 AM
To: Grindeanu, Iulian R.
Cc: Jain, Rajeev; MOAB Dev
Subject: Re: [MOAB-dev] 137GB mesh file
> I think we need to modify mbsize to run better in parallel (for example, to
> gather all counts from each processor, for the owned entities) and print
> only once, something like HelloParMOAB.
Hmm, OK. Make all the tools parallel-capable then? mbsize, mbpart,
mbconvert, etc.
Iulian, we need to make a TODO list, and then we can prioritize what's
needed right now. I'll also get started on a list in parallel and we can
consolidate.
Vijay
On Thu, Jul 31, 2014 at 9:30 AM, Grindeanu, Iulian R.
<iulian at mcs.anl.gov> wrote:
> Hi Rajeev,
> You can use the mbsize -p0 option.
> This will load the file in parallel, but the printout will be mangled
> across processors; each processor will write its own counts (-p1 will
> also resolve shared entities, and -p2 will exchange a layer of ghost
> elements).
>
> So you can do something like this:
> mpiexec -np 64 mbsize -p0 <file_name>
>
> How many partitions do you have?
>
> Do you have partitions? (parallel partitions?)
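> (If not, mbpart can create a PARALLEL_PARTITION for you; from memory,
> the usage is something like "mbpart 64 <file_name> parted.h5m", but as
> noted above mbpart runs in serial, so the whole 137GB file would have
> to fit in memory.)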
>
> You can also modify the HelloParMOAB example for your needs (harder).
>
> The LoadPartial example can load just a few partitions and write a
> smaller file; it can be run in serial.
>
> Something like:
> ./LoadPartial <file_name> PARALLEL_PARTITION 0 1 5
>
> This will load only partitions 0, 1, and 5, in serial, and write a
> part.h5m file, on which you can run mbsize in serial (if it is small
> enough).
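> (For reference, a minimal sketch of what LoadPartial does internally,
> assuming it uses the set_tag_name/set_tag_values arguments of
> Interface::load_file; the file names here are placeholders:)
>
> #include "moab/Core.hpp"
> using namespace moab;
>
> int main()
> {
>   Core mb;
>   // load only the sets tagged PARALLEL_PARTITION with values 0, 1, or 5,
>   // together with the entities they contain
>   int vals[] = { 0, 1, 5 };
>   ErrorCode rval = mb.load_file("input.h5m", 0, 0, "PARALLEL_PARTITION", vals, 3);
>   if (MB_SUCCESS != rval) return 1;
>   // write the (much smaller) partial mesh so mbsize can read it in serial
>   rval = mb.write_file("part.h5m");
>   return (MB_SUCCESS == rval) ? 0 : 1;
> }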
>
> All these methods will at least confirm that you have a "readable" file.
>
> I think we need to modify mbsize to run better in parallel (for example, to
> gather all counts from each processor, for the owned entities) and print
> only once, something like HelloParMOAB.
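> (A rough sketch of that idea in the HelloParMOAB style; this is not the
> actual mbsize code, and it assumes the file has a PARALLEL_PARTITION tag.
> Each rank counts the entities it owns, and rank 0 prints the reduced
> totals once:)
>
> #include "moab/Core.hpp"
> #include "moab/ParallelComm.hpp"
> #include "MBParallelConventions.h"
> #include "mpi.h"
> #include <iostream>
> using namespace moab;
>
> int main(int argc, char** argv)
> {
>   MPI_Init(&argc, &argv);
>   Core mb;
>   ErrorCode rval = mb.load_file(argv[1], 0,
>     "PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;PARALLEL_RESOLVE_SHARED_ENTS");
>   if (MB_SUCCESS != rval) MPI_Abort(MPI_COMM_WORLD, 1);
>   ParallelComm* pcomm = ParallelComm::get_pcomm(&mb, 0);
>   int rank;
>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>   long local[4] = { 0, 0, 0, 0 }, global[4];
>   for (int dim = 0; dim <= 3; dim++) {
>     Range ents;
>     mb.get_entities_by_dimension(0, dim, ents);
>     // drop entities owned by other processors, so nothing is counted twice
>     pcomm->filter_pstatus(ents, PSTATUS_NOT_OWNED, PSTATUS_NOT);
>     local[dim] = (long)ents.size();
>   }
>   MPI_Reduce(local, global, 4, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
>   if (0 == rank)
>     for (int dim = 0; dim <= 3; dim++)
>       std::cout << global[dim] << " entities of dimension " << dim << std::endl;
>   MPI_Finalize();
>   return 0;
> }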
>
> Iulian
>
> ________________________________
> From: moab-dev-bounces at mcs.anl.gov [moab-dev-bounces at mcs.anl.gov] on behalf
> of Rajeev Jain [jain at mcs.anl.gov]
> Sent: Wednesday, July 30, 2014 11:40 PM
> To: MOAB Dev
> Subject: [MOAB-dev] 137GB mesh file
>
> I have an ABTR model (billions of hex elements, created by MeshKit/rgg) that
> is 137GB on fusion at ANL:
> /homes/jain/sigma/moab/lasso/meshkit/rgg/test/c_abtr_sh
>
> I'm unable to run mbsize on it; any suggestions?
> I just want to make sure this is a valid file, and I also want to know
> the material/boundary sets contained in it.
>
> Rajeev
>