[MOAB-dev] Error message when compiling the mbcoupler_test.cpp

Grindeanu, Iulian R. iulian at mcs.anl.gov
Fri May 13 11:50:36 CDT 2016


So MOAB is configured without parallel HDF5.

Either HDF5 was not built with the parallel option, or some other error occurred.

We test a similar configuration here overnight:

http://gnep.mcs.anl.gov:8010/builders/moab-download-gnu/builds/57

Your config.log would show more info.
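A quick check would be something like the following sketch (the exact macro names in MOABConfig.h can vary between MOAB versions):

grep -i hdf5 config.log | grep -i parallel
grep HDF5 src/moab/MOABConfig.h

If MOABConfig.h defines MOAB_HAVE_HDF5 but no parallel-HDF5 macro, the HDF5 reader will refuse parallel reads.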


________________________________
From: Jie Wu [jie.voo at gmail.com]
Sent: Friday, May 13, 2016 11:27 AM
To: Vijay S. Mahadevan
Cc: Grindeanu, Iulian R.; moab-dev at mcs.anl.gov
Subject: Re: [MOAB-dev] Error message when compiling the mbcoupler_test.cpp

Hi Vijay,

Thank you for your reply. The new configuration does work. Here is the command I input, followed by the output of make check.

dyn-160-39-10-173:moab jiewu$ ./configure --with-mpi=/usr/local CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpic++ FC=/usr/local/bin/mpif90 F77=/usr/local/bin/mpif77 --download-metis --download-hdf5 --download-netcdf --enable-docs --with-doxygen=/Applications/Doxygen.app/Contents/Resources/

The results are:

configure: WARNING:
*************************************************************************
*        MOAB has been configured with parallel and HDF5 support
*     but the configured HDF5 library does not support parallel IO.
*            Some parallel IO capabilities will be disabled.
*************************************************************************
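This warning means the HDF5 that --download-hdf5 produced here is a serial build. For reference, one way around it is to build a parallel HDF5 against the same MPI wrappers and point MOAB at it; this is only a sketch with a made-up install prefix (HDF5's --enable-parallel flag and MOAB's --with-hdf5 option are standard):

# run inside the HDF5 source tree; $HOME/hdf5-parallel is a placeholder prefix
CC=/usr/local/bin/mpicc ./configure --enable-parallel --prefix=$HOME/hdf5-parallel
make && make install
# then, in the MOAB source tree, replace --download-hdf5 with:
./configure --with-mpi=/usr/local --with-hdf5=$HOME/hdf5-parallel CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpic++ FC=/usr/local/bin/mpif90 F77=/usr/local/bin/mpif77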

After that, I ran make -j4 and then make check. Here are the results:

/Applications/Xcode.app/Contents/Developer/usr/bin/make  check-TESTS
PASS: elem_util_test
PASS: element_test
FAIL: datacoupler_test
FAIL: mbcoupler_test
FAIL: parcoupler_test
============================================================================
Testsuite summary for MOAB 4.9.2pre
============================================================================
# TOTAL: 5
# PASS:  2
# SKIP:  0
# XFAIL: 0
# FAIL:  3
# XPASS: 0
# ERROR: 0
============================================================================
See tools/mbcoupler/test-suite.log
Please report to moab-dev at mcs.anl.gov
============================================================================
make[4]: *** [test-suite.log] Error 1
make[3]: *** [check-TESTS] Error 2
make[2]: *** [check-am] Error 2
make[1]: *** [check-recursive] Error 1
make: *** [check-recursive] Error 1

Then I tried to run mbcoupler_test with the command:
dyn-160-39-10-173:mbcoupler jiewu$    mpiexec -np 2  mbcoupler_test -meshes /Users/wujie/SourceCode/moab/MeshFiles/unittest/64bricks_1khex.h5m /Users/wujie/SourceCode/moab/MeshFiles/unittest/64bricks_12ktet.h5m -itag vertex_field -outfile dum.h5m
Now the results look like this:

[0]MOAB ERROR: --------------------- Error Message ------------------------------------
[0]MOAB ERROR: MOAB not configured with parallel HDF5 support!
[0]MOAB ERROR: set_up_read() line 355 in src/io/ReadHDF5.cpp
[0]MOAB ERROR: --------------------- Error Message ------------------------------------
[0]MOAB ERROR: NULL file handle.!
[0]MOAB ERROR: is_error() line 133 in src/io/ReadHDF5.hpp
[1]MOAB ERROR: --------------------- Error Message ------------------------------------
[1]MOAB ERROR: MOAB not configured with parallel HDF5 support!
[1]MOAB ERROR: set_up_read() line 355 in src/io/ReadHDF5.cpp
[1]MOAB ERROR: --------------------- Error Message ------------------------------------
[1]MOAB ERROR: NULL file handle.!
[1]MOAB ERROR: is_error() line 133 in src/io/ReadHDF5.hpp
[0]MOAB ERROR: --------------------- Error Message ------------------------------------
[0]MOAB ERROR: Failed to load file after trying all possible readers!
[0]MOAB ERROR: serial_load_file() line 635 in src/Core.cpp
[0]MOAB ERROR: --------------------- Error Message ------------------------------------
[0]MOAB ERROR: Failed in step PARALLEL READ PART!
[0]MOAB ERROR: load_file() line 553 in src/parallel/ReadParallel.cpp
[0]MOAB ERROR: load_file() line 252 in src/parallel/ReadParallel.cpp
[0]MOAB ERROR: load_file() line 515 in src/Core.cpp
[1]MOAB ERROR: --------------------- Error Message ------------------------------------
[1]MOAB ERROR: Failed to load file after trying all possible readers!
[1]MOAB ERROR: serial_load_file() line 635 in src/Core.cpp
[1]MOAB ERROR: --------------------- Error Message ------------------------------------
[1]MOAB ERROR: Failed in step PARALLEL READ PART!
[1]MOAB ERROR: load_file() line 553 in src/parallel/ReadParallel.cpp
[0]MOAB ERROR: main() line 172 in mbcoupler_test.cpp
[1]MOAB ERROR: load_file() line 252 in src/parallel/ReadParallel.cpp
[1]MOAB ERROR: load_file() line 515 in src/Core.cpp
[1]MOAB ERROR: main() line 172 in mbcoupler_test.cpp
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 16.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[dyn-160-39-11-172.dyn.columbia.edu:26361] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[dyn-160-39-11-172.dyn.columbia.edu:26361] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
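(For what it's worth, the parallel-HDF5 status of the library MOAB actually linked against can be confirmed from its settings file; the install prefix below is a placeholder:

grep -i parallel /path/to/hdf5/lib/libhdf5.settings

A serial build reports "Parallel HDF5: no".)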
The other executable files in the mbcoupler folder, like ./element_test, can now run successfully.

Any direction on this would be deeply appreciated!

Best,
Jie

On May 13, 2016, at 11:54 AM, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:

I also forgot to mention that you can be more explicit in the
specification of compilers during configuration, if you really want.
This avoids headaches, since the user-specified wrappers will be
used directly. For example:

./configure --with-mpi=/usr/local CC=/usr/local/bin/mpicc
CXX=/usr/local/bin/mpic++ FC=/usr/local/bin/mpif90
F77=/usr/local/bin/mpif77 <OTHER_CONFIGURE_OPTIONS>
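A quick sanity check that all four wrappers resolve to the same MPI install (both MPICH and Open MPI wrappers accept -show to print the underlying compiler command they invoke):

which mpicc mpic++ mpif90 mpif77
/usr/local/bin/mpicc -show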

Vijay

On Fri, May 13, 2016 at 10:52 AM, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
Please use --with-mpi=/usr/local as the configure option. If you have
MPI pre-installed, either through PETSc or natively in your system,
always try to re-use that to maintain consistency. This will help
avoid mix-up of MPI implementations when you try to launch a parallel
job later.

Let us know if the new configuration works.

Vijay

On Thu, May 12, 2016 at 6:32 PM, Jie Wu <jie.voo at gmail.com> wrote:
Thank you very much for your reply. I searched my src/moab/MOABConfig.h
file and did not find MOAB_HAVE_MPI, so I think the MOAB on my laptop
cannot run in parallel yet.

Here is what I get from my terminal:

dyn-160-39-10-173:moab jiewu$ which mpic++
/usr/local/bin/mpic++

But I get nothing when I input which mpi, so I don't know where
MPI is on my laptop.

dyn-160-39-10-173:moab jiewu$ which mpi
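(There is no binary called "mpi"; an MPI installation shows up as wrapper compilers and a launcher. A sketch of what to look for instead:

which mpicc mpiexec mpirun
mpiexec --version   # reports which MPI implementation (MPICH, Open MPI, ...) is first on the PATH

)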

I have PETSc installed, which includes MPICH, and pointing the MOAB
configure option at that directory also did not work.

Should I install MPI on its own? I am afraid it might conflict with the
one installed with PETSc.

Is there a way to have MPI installed automatically along with MOAB,
or some other proper approach? Thanks a lot!
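(For reference, if the PETSc-provided MPICH is to be reused, the wrappers from a --download-mpich PETSc build typically land under the arch directory, so a sketch like this may work; PETSC_DIR and PETSC_ARCH are PETSc's usual environment variables, and the remaining options are only illustrative:

# confirm the wrappers exist, then point MOAB at that MPI
ls $PETSC_DIR/$PETSC_ARCH/bin/mpicc
./configure --with-mpi=$PETSC_DIR/$PETSC_ARCH --download-metis --download-hdf5 --download-netcdf

)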

Best,
Jie


On May 12, 2016, at 6:11 PM, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:

Can you send the config.log? It looks like MPI is not getting enabled
(possibly), even with --download-mpich. Or you can check
src/moab/MOABConfig.h in your build directory and grep for
MOAB_HAVE_MPI.
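Concretely, something like this sketch, run from the build directory:

grep MOAB_HAVE_MPI src/moab/MOABConfig.h

A line like "#define MOAB_HAVE_MPI 1" means MPI support is on; no output (or a commented-out define) means it is off.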

Vijay

On Thu, May 12, 2016 at 4:46 PM, Jie Wu <jie.voo at gmail.com> wrote:

Hi Iulian,

Thanks for your reply. I think my configure command does include MPI.
Here it is:

dyn-160-39-10-173:moab jiewu$    ./configure --download-metis
--download-hdf5 --download-netcdf --download-mpich --enable-docs
--with-doxygen=/Applications/Doxygen.app/Contents/Resources/

Best,
Jie

On May 12, 2016, at 5:43 PM, Grindeanu, Iulian R. <iulian at mcs.anl.gov> wrote:

Hi Jie,
Did you configure with mpi? What is your configure command?

Iulian
________________________________
From: moab-dev-bounces at mcs.anl.gov [moab-dev-bounces at mcs.anl.gov] on behalf of Jie Wu [jie.voo at gmail.com]
Sent: Thursday, May 12, 2016 4:23 PM
To: moab-dev at mcs.anl.gov
Subject: [MOAB-dev] Error message when compiling the mbcoupler_test.cpp

Hi all,

My name is Jie, and I am part of the computational mechanics group in the
civil engineering department at Columbia University.

I am working on large-deformation problems, in which the mesh may become
distorted and re-meshing becomes necessary.

I would like to compile mbcoupler_test.cpp to learn how it transfers
variables from the old mesh to the new one.

I can now successfully compile the code in build/examples, and it works
well! But I cannot compile the code in the build/tools/mbcoupler folder by
following the instructions in build/tools/readme.tools; it fails with the
error messages below.

Do you have any idea for this problem? Thanks a lot!

Best,
Jie

DataCoupler.cpp:136:25: error: member access into incomplete
    type 'moab::ParallelComm'
if (myPcomm && myPcomm->size() > 1) {
                      ^
./DataCoupler.hpp:34:7: note: forward declaration of
    'moab::ParallelComm'
class ParallelComm;
    ^
DataCoupler.cpp:161:12: error: member access into incomplete
    type 'moab::ParallelComm'
  myPcomm->proc_config().crystal_router()->gs_tran...
         ^
./DataCoupler.hpp:34:7: note: forward declaration of
    'moab::ParallelComm'
class ParallelComm;
    ^
DataCoupler.cpp:187:12: error: member access into incomplete
    type 'moab::ParallelComm'
  myPcomm->proc_config().crystal_router()->gs_tran...
         ^
./DataCoupler.hpp:34:7: note: forward declaration of
    'moab::ParallelComm'
class ParallelComm;
    ^
3 errors generated.
make[2]: *** [DataCoupler.lo] Error 1
make[2]: *** Waiting for unfinished jobs....
Coupler.cpp:344:45: error: no member named 'gs_transfer' in
    'moab::gs_data::crystal_data'
  (myPc->proc_config().crystal_router())->gs_trans...
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  ^
Coupler.cpp:388:45: error: no member named 'gs_transfer' in
    'moab::gs_data::crystal_data'
  (myPc->proc_config().crystal_router())->gs_trans...
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  ^
Coupler.cpp:611:45: error: no member named 'gs_transfer' in
    'moab::gs_data::crystal_data'
  (myPc->proc_config().crystal_router())->gs_trans...
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  ^
Coupler.cpp:638:43: error: no member named 'gs_transfer' in
    'moab::gs_data::crystal_data'
  myPc->proc_config().crystal_router()->gs_transfe...
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  ^
4 errors generated.
make[2]: *** [Coupler.lo] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1
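These errors are consistent with an MPI-less build: DataCoupler.hpp only forward-declares moab::ParallelComm, and the full class definition (and the gs_transfer path in the crystal router) is compiled in only when MOAB is configured with MPI. A quick check, as a sketch assuming the build directory layout used above:

grep MOAB_HAVE_MPI src/moab/MOABConfig.h || echo "MPI support is off"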




