<br class=""><div><blockquote type="cite" class=""><div class="">On May 13, 2016, at 1:08 PM, Vijay S. Mahadevan <<a href="mailto:vijay.m@gmail.com" class="">vijay.m@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class="">Jie, since you now changed the compilers used to configure the stack,<br class="">I would advise deleting the auto-installed packages under sandbox<br class="">directory and then re-configuring. This will force the rebuild of<br class="">HDF5/NetCDF packages with the parallel MPI compilers and you should<br class="">get the HDF5-parallel support enabled after that. Makes sense ?<br class=""><br class="">Vijay<br class=""><br class="">On Fri, May 13, 2016 at 11:27 AM, Jie Wu <<a href="mailto:jie.voo@gmail.com" class="">jie.voo@gmail.com</a>> wrote:<br class=""><blockquote type="cite" class="">Hi Vijay,<br class=""><br class="">Thank you for your reply. The new configuration do works. Here shows the<br class="">command I inputed and the message while I input: make check.<br class=""><br class="">dyn-160-39-10-173:moab jiewu$ ./configure --with-mpi=/usr/local<br class="">CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpic++ FC=/usr/local/bin/mpif90<br class="">F77=/usr/local/bin/mpif77 --download-metis --download-hdf5 --download-netcdf<br class="">--enable-docs --with-doxygen=/Applications/Doxygen.app/Contents/Resources/<br class=""><br class="">The results are<br class=""><br class="">configure: WARNING:<br class="">*************************************************************************<br class="">* MOAB has been configured with parallel and HDF5 support<br class="">* but the configured HDF5 library does not support parallel IO.<br class="">* Some parallel IO capabilities will be disabled.<br class="">*************************************************************************<br class=""><br class="">After that, I inputted the command, make -j4, and then command make check.<br class="">Here are the results it shows<br class=""><br class="">/Applications/Xcode.app/Contents/Developer/usr/bin/make check-TESTS<br class="">PASS: elem_util_test<br class="">PASS: element_test<br class="">FAIL: datacoupler_test<br class="">FAIL: mbcoupler_test<br class="">FAIL: parcoupler_test<br class="">============================================================================<br class="">Testsuite summary for MOAB 4.9.2pre<br class="">============================================================================<br class=""># TOTAL: 5<br class=""># PASS: 2<br class=""># SKIP: 0<br class=""># XFAIL: 0<br class=""># FAIL: 3<br class=""># XPASS: 0<br class=""># ERROR: 0<br class="">============================================================================<br class="">See tools/mbcoupler/test-suite.log<br class="">Please report to <a href="mailto:moab-dev@mcs.anl.gov" class="">moab-dev@mcs.anl.gov</a><br class="">============================================================================<br class="">make[4]: *** [test-suite.log] Error 1<br class="">make[3]: *** [check-TESTS] Error 2<br class="">make[2]: *** [check-am] Error 2<br class="">make[1]: *** [check-recursive] Error 1<br class="">make: *** [check-recursive] Error 1<br class=""><br class="">Then I was trying to run the mbcouple_test with command<br class="">dyn-160-39-10-173:mbcoupler jiewu$ mpiexec -np 2 mbcoupler_test -meshes<br class="">/Users/wujie/SourceCode/moab/MeshFiles/unittest/64bricks_1khex.h5m<br class="">/Users/wujie/SourceCode/moab/MeshFiles/unittest/64bricks_12ktet.h5m -itag<br 
class="">vertex_field -outfile dum.h5m<br class="">Now the results look like,<br class=""><br class="">[0]MOAB ERROR: --------------------- Error Message<br class="">------------------------------------<br class="">[0]MOAB ERROR: MOAB not configured with parallel HDF5 support!<br class="">[0]MOAB ERROR: set_up_read() line 355 in src/io/ReadHDF5.cpp<br class="">[0]MOAB ERROR: --------------------- Error Message<br class="">------------------------------------<br class="">[0]MOAB ERROR: NULL file handle.!<br class="">[0]MOAB ERROR: is_error() line 133 in src/io/ReadHDF5.hpp<br class="">[1]MOAB ERROR: --------------------- Error Message<br class="">------------------------------------<br class="">[1]MOAB ERROR: MOAB not configured with parallel HDF5 support!<br class="">[1]MOAB ERROR: set_up_read() line 355 in src/io/ReadHDF5.cpp<br class="">[1]MOAB ERROR: --------------------- Error Message<br class="">------------------------------------<br class="">[1]MOAB ERROR: NULL file handle.!<br class="">[1]MOAB ERROR: is_error() line 133 in src/io/ReadHDF5.hpp<br class="">[0]MOAB ERROR: --------------------- Error Message<br class="">------------------------------------<br class="">[0]MOAB ERROR: Failed to load file after trying all possible readers!<br class="">[0]MOAB ERROR: serial_load_file() line 635 in src/Core.cpp<br class="">[0]MOAB ERROR: --------------------- Error Message<br class="">------------------------------------<br class="">[0]MOAB ERROR: Failed in step PARALLEL READ PART!<br class="">[0]MOAB ERROR: load_file() line 553 in src/parallel/ReadParallel.cpp<br class="">[0]MOAB ERROR: load_file() line 252 in src/parallel/ReadParallel.cpp<br class="">[0]MOAB ERROR: load_file() line 515 in src/Core.cpp<br class="">[1]MOAB ERROR: --------------------- Error Message<br class="">------------------------------------<br class="">[1]MOAB ERROR: Failed to load file after trying all possible readers!<br class="">[1]MOAB ERROR: serial_load_file() line 635 in src/Core.cpp<br class="">[1]MOAB ERROR: --------------------- Error Message<br class="">------------------------------------<br class="">[1]MOAB ERROR: Failed in step PARALLEL READ PART!<br class="">[1]MOAB ERROR: load_file() line 553 in src/parallel/ReadParallel.cpp<br class="">[0]MOAB ERROR: main() line 172 in mbcoupler_test.cpp<br class="">[1]MOAB ERROR: load_file() line 252 in src/parallel/ReadParallel.cpp<br class="">[1]MOAB ERROR: load_file() line 515 in src/Core.cpp<br class="">[1]MOAB ERROR: main() line 172 in mbcoupler_test.cpp<br class="">--------------------------------------------------------------------------<br class="">MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD<br class="">with errorcode 16.<br class=""><br class="">NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.<br class="">You may or may not see output from other processes, depending on<br class="">exactly when Open MPI kills them.<br class="">--------------------------------------------------------------------------<br class="">[<a href="http://dyn-160-39-11-172.dyn.columbia.edu" class="">dyn-160-39-11-172.dyn.columbia.edu</a>:26361] 1 more process has sent help<br class="">message help-mpi-api.txt / mpi-abort<br class="">[<a href="http://dyn-160-39-11-172.dyn.columbia.edu" class="">dyn-160-39-11-172.dyn.columbia.edu</a>:26361] Set MCA parameter<br class="">"orte_base_help_aggregate" to 0 to see all help / error messages<br class="">The other excitable file in mbcoupler folder, like ./element_test now can<br class="">run successfully.<br 
class=""><br class="">Any direction for this was deeply appreciated!<br class=""><br class="">Best,<br class="">Jie<br class=""><br class=""><br class="">On May 13, 2016, at 11:54 AM, Vijay S. Mahadevan <<a href="mailto:vijay.m@gmail.com" class="">vijay.m@gmail.com</a>> wrote:<br class=""><br class="">I also forgot to mention that you can be more explicit in the<br class="">specification of compilers during configuration, if you really want.<br class="">This would avoid headaches since the user specified wrappers will<br class="">directly be used. For example:<br class=""><br class="">./configure --with-mpi=/usr/local CC=/usr/local/bin/mpicc<br class="">CXX=/usr/local/bin/mpic++ FC=/usr/local/bin/mpif90<br class="">F77=/usr/local/bin/mpif77 <OTHER_CONFIGURE_OPTIONS><br class=""><br class="">Vijay<br class=""><br class="">On Fri, May 13, 2016 at 10:52 AM, Vijay S. Mahadevan <<a href="mailto:vijay.m@gmail.com" class="">vijay.m@gmail.com</a>><br class="">wrote:<br class=""><br class="">Please use --with-mpi=/usr/local as the configure option. If you have<br class="">MPI pre-installed, either through PETSc or natively in your system,<br class="">always try to re-use that to maintain consistency. This will help<br class="">avoid mix-up of MPI implementations when you try to launch a parallel<br class="">job later.<br class=""><br class="">Let us know if the new configuration works.<br class=""><br class="">Vijay<br class=""><br class="">On Thu, May 12, 2016 at 6:32 PM, Jie Wu <<a href="mailto:jie.voo@gmail.com" class="">jie.voo@gmail.com</a>> wrote:<br class=""><br class="">Thank you very much for your reply. I search in my src/moab/MOABConfig.h<br class="">file, and I did not find the MOAB_HAVE_MPI. So I think the MOAB on my laptop<br class="">cannot run in parallel yet.<br class=""><br class="">I have this from my terminal.<br class=""><br class="">dyn-160-39-10-173:moab jiewu$ which mpic++<br class="">/usr/local/bin/mpic++<br class=""><br class="">But I cannot get anything by inputing which mpi. So I don’t know where the<br class="">mpi is in my laptop.<br class=""><br class="">dyn-160-39-10-173:moab jiewu$ which mpi<br class=""><br class="">I have petsc installed which has mpich, and linking that directory to the<br class="">MOAB configure option also did not work.<br class=""><br class="">Should I install mpi solely? I am afraid it might conflict with the one<br class="">installed in petsc.<br class=""><br class="">Is there any method that the mpi could be installed with MOAB automatically,<br class="">or maybe other proper manner? Thanks a lot!<br class=""><br class="">Best,<br class="">Jie<br class=""><br class=""><br class="">On May 12, 2016, at 6:11 PM, Vijay S. Mahadevan <<a href="mailto:vijay.m@gmail.com" class="">vijay.m@gmail.com</a>> wrote:<br class=""><br class="">Can you send the config.log. Looks like MPI is not getting enabled<br class="">(possibly), even with --dowload-mpich. Or you can check<br class="">src/moab/MOABConfig.h in your build directory and grep for<br class="">MOAB_HAVE_MPI.<br class=""><br class="">Vijay<br class=""><br class="">On Thu, May 12, 2016 at 4:46 PM, Jie Wu <<a href="mailto:jie.voo@gmail.com" class="">jie.voo@gmail.com</a>> wrote:<br class=""><br class="">Hi Iulian,<br class=""><br class="">Thanks for your reply. I think my configure is with mpi. 
Here is my<br class="">configure command<br class=""><br class="">dyn-160-39-10-173:moab jiewu$ ./configure --download-metis<br class="">--download-hdf5 --download-netcdf --download-mpich --enable-docs<br class="">--with-doxygen=/Applications/Doxygen.app/Contents/Resources/<br class=""><br class="">Best,<br class="">Jie<br class=""><br class="">On May 12, 2016, at 5:43 PM, Grindeanu, Iulian R. <<a href="mailto:iulian@mcs.anl.gov" class="">iulian@mcs.anl.gov</a>><br class="">wrote:<br class=""><br class="">Hi Jie,<br class="">Did you configure with mpi? What is your configure command?<br class=""><br class="">Iulian<br class="">________________________________<br class="">From: <a href="mailto:moab-dev-bounces@mcs.anl.gov" class="">moab-dev-bounces@mcs.anl.gov</a> [<a href="mailto:moab-dev-bounces@mcs.anl.gov" class="">moab-dev-bounces@mcs.anl.gov</a>] on behalf<br class="">of Jie Wu [<a href="mailto:jie.voo@gmail.com" class="">jie.voo@gmail.com</a>]<br class="">Sent: Thursday, May 12, 2016 4:23 PM<br class="">To: <a href="mailto:moab-dev@mcs.anl.gov" class="">moab-dev@mcs.anl.gov</a><br class="">Subject: [MOAB-dev] Error message when compiling the mbcoupler_test.cpp<br class=""><br class="">Hi all,<br class=""><br class="">My name is Jie and I am part of computational mechanics group at civil<br class="">engineering dept. of Columbia university.<br class=""><br class="">I am working on large deformation problems which may lead mesh distortions<br class="">and a re-meshing become necessary.<br class=""><br class="">I would like to compile mbcoupler_test.cpp to learn how it transfer the<br class="">variables from old to the new mesh.<br class=""><br class="">Now I can successfully compile the codes in build/examples and they works<br class="">good! But I cannot compile the codes in folder build/tools/mbcoupler by<br class="">following the instructions in build/tools/readme.tools, which shows error<br class="">message like following.<br class=""><br class="">Do you have any idea for this problem? 
Thanks a lot!<br class=""><br class="">Best,<br class="">Jie<br class=""><br class="">DataCoupler.cpp:136:25: error: member access into incomplete<br class=""> type 'moab::ParallelComm'<br class="">if (myPcomm && myPcomm->size() > 1) {<br class=""> ^<br class="">./DataCoupler.hpp:34:7: note: forward declaration of<br class=""> 'moab::ParallelComm'<br class="">class ParallelComm;<br class=""> ^<br class="">DataCoupler.cpp:161:12: error: member access into incomplete<br class=""> type 'moab::ParallelComm'<br class=""> myPcomm->proc_config().crystal_router()->gs_tran...<br class=""> ^<br class="">./DataCoupler.hpp:34:7: note: forward declaration of<br class=""> 'moab::ParallelComm'<br class="">class ParallelComm;<br class=""> ^<br class="">DataCoupler.cpp:187:12: error: member access into incomplete<br class=""> type 'moab::ParallelComm'<br class=""> myPcomm->proc_config().crystal_router()->gs_tran...<br class=""> ^<br class="">./DataCoupler.hpp:34:7: note: forward declaration of<br class=""> 'moab::ParallelComm'<br class="">class ParallelComm;<br class=""> ^<br class="">3 errors generated.<br class="">make[2]: *** [DataCoupler.lo] Error 1<br class="">make[2]: *** Waiting for unfinished jobs....<br class="">Coupler.cpp:344:45: error: no member named 'gs_transfer' in<br class=""> 'moab::gs_data::crystal_data'<br class=""> (myPc->proc_config().crystal_router())->gs_trans...<br class=""> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^<br class="">Coupler.cpp:388:45: error: no member named 'gs_transfer' in<br class=""> 'moab::gs_data::crystal_data'<br class=""> (myPc->proc_config().crystal_router())->gs_trans...<br class=""> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^<br class="">Coupler.cpp:611:45: error: no member named 'gs_transfer' in<br class=""> 'moab::gs_data::crystal_data'<br class=""> (myPc->proc_config().crystal_router())->gs_trans...<br class=""> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^<br class="">Coupler.cpp:638:43: error: no member named 'gs_transfer' in<br class=""> 'moab::gs_data::crystal_data'<br class=""> myPc->proc_config().crystal_router()->gs_transfe...<br class=""> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^<br class="">4 errors generated.<br class="">make[2]: *** [Coupler.lo] Error 1<br class="">make[1]: *** [all-recursive] Error 1<br class="">make: *** [all-recursive] Error 1<br class=""><br class=""><br class=""><br class=""><br class=""><br class=""><br class=""></blockquote></div></div></blockquote></div><br class=""></div></body></html>