[MOAB-dev] problems reading meshes in MOAB in the 'read_part' mode

Dmitry Karpeev karpeev at mcs.anl.gov
Thu Apr 1 13:01:26 CDT 2010


I've run several cases of mbparallelcomm_test with 3 different files on my laptop,
and I conclude from the results sketched below that the 'read_part' mode is
currently broken.

For these runs I used MOAB/trunk rev 3733 and three files:
64bricks_1khex_256.h5m,
64bricks_1mhex_1024.h5m, and
tjunc6RIB_16.h5m.
All of these can be found in my MCS home directory: the first two under
~karpeev/fathom/moab/data/64bricks and the third under
~karpeev/fathom/moab/data/tjunc6.
I made sure that my local laptop copies are identical to those on the
MCS machines.

When I attempt to read any of the above three files with mbparallelcomm_test
in 'read_part' mode on 1 or 2 procs (launching with mpiexec even in the
uniproc case), the run fails, producing more or less this output:
--------------------------------------------------------------------------------------------------------------------------------------------
Read times: -1.27014e+09 0 0 (PARALLEL READ PART/PARALLEL
RESOLVE_SHARED_ENTS/PARALLEL EXCHANGE_GHOSTS/)
Couldn't read mesh; error message:
(none)
application called MPI_Abort(MPI_COMM_WORLD, 0) - process 0
rank 0 in job 28  hal_57232   caused collective abort of all ranks
  exit status of rank 0: return code 0
---------------------------------------------------------------------------------------------------------------------------------------------

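For reference, what I am exercising is essentially a READ_PART load through
the MOAB API. Below is a minimal sketch, assuming the namespaced moab::Core
interface and MOAB's documented parallel read options; the partition tag name
and the exact option string that mbparallelcomm_test builds internally are
assumptions on my part and may differ from what the test actually uses.

// Minimal sketch of a READ_PART load (not the exact code path of
// mbparallelcomm_test).  The option string is an assumption based on
// MOAB's parallel read options; in particular the PARTITION tag may need
// to be MATERIAL_SET or PARALLEL_PARTITION, depending on how the file
// was partitioned.
#include <iostream>
#include "moab/Core.hpp"
#include "mpi.h"

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);

  moab::Core mb;
  const char* options =
      "PARALLEL=READ_PART;"            // each rank reads only its own parts
      "PARTITION=PARALLEL_PARTITION;"  // tag naming the partition sets (assumed)
      "PARALLEL_RESOLVE_SHARED_ENTS";  // resolve shared entities after the read

  moab::ErrorCode rval = mb.load_file("64bricks_1khex_256.h5m", 0, options);
  if (moab::MB_SUCCESS != rval)
    std::cerr << "Couldn't read mesh; load_file returned " << rval << std::endl;

  MPI_Finalize();
  return (moab::MB_SUCCESS == rval) ? 0 : 1;
}

Built against MOAB and MPI, this would be run the same way as the test, e.g.
"mpiexec -np 2 ./read_part_sketch".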
What is significant here is that 64bricks_1khex_256.h5m appears to be badly
partitioned: reading it in 'bcast_delete' mode produces a "Number of procs
greater than number of partitions." error with both 1 and 2 procs.  HOWEVER,
when attempting to read it in 'read_part' mode, the error for this file is
the same as for the other two files (shown above).  I am guessing that
whatever bug exists in 'read_part' trumps the bad partitioning of
64bricks_1khex_256.h5m.

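For comparison, the 'bcast_delete' runs differ, as far as I understand, only
in the parallel read option; again this is a sketch and the exact string is
an assumption:

  const char* options =
      "PARALLEL=BCAST_DELETE;"         // rank 0 reads, broadcasts, others delete what isn't theirs
      "PARTITION=PARALLEL_PARTITION;"  // same partition tag assumption as above
      "PARALLEL_RESOLVE_SHARED_ENTS";
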
Hopefully, this is of some use in tracking down the problem with 'read_part'.

Thanks.
Dmitry.

