[MOAB-dev] Error message when compiling the mbcoupler_test.cpp
Jie Wu
jie.voo at gmail.com
Fri May 13 18:12:56 CDT 2016
Hi Iulian,
Running make clean in the moab src and test folders worked! All the tests now pass. Thank you so much for your help!
Best,
Jie
> On May 13, 2016, at 6:16 PM, Grindeanu, Iulian R. <iulian at mcs.anl.gov> wrote:
>
> Another possible cause is that you do not have a clean build.
> Can you do a make clean in your moab src and test folders, and then run make and make check again? (This assumes that your configuration was already successful, as you reported.)
> I cannot explain why your MPI calls would have problems.
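> For reference, a rough sketch of that sequence (assuming an in-source build with the src and test folders at the top of the MOAB tree):
>
>   cd moab
>   make -C src clean
>   make -C test clean
>   make && make check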
> ________________________________________
> From: Jie Wu [jie.voo at gmail.com]
> Sent: Friday, May 13, 2016 3:41 PM
> To: Vijay S. Mahadevan
> Cc: Grindeanu, Iulian R.; moab-dev at mcs.anl.gov
> Subject: Re: [MOAB-dev] Error message when compiling the mbcoupler_test.cpp
>
> Hi Vijay,
>
> Thank you so much for your help! 'make install' works well on my laptop.
>
> I am looking forward to seeing the possible causes of the failing tests.
>
> Best,
> Jie
>
>
>> On May 13, 2016, at 4:08 PM, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
>>
>> Ok, looks like your MPI installation and MOAB+MPI are configured and
>> working fine. We will look at the tests and see why they are failing.
>> I haven't been able to replicate the errors locally but I think we
>> should be able to find the issue soon.
>>
>> If "make install" works (you should configure with a
>> --prefix=$INSTALL_DIR option), you can start using the library now.
>> I'll update you once we find the root cause for the issues you found.
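>> For example, roughly (the prefix path below is only a placeholder):
>>
>>   ./configure --prefix=$HOME/install/moab <OTHER_CONFIGURE_OPTIONS>
>>   make -j4 && make install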
>>
>> Vijay
>>
>> On Fri, May 13, 2016 at 2:12 PM, Jie Wu <jie.voo at gmail.com> wrote:
>>> Hi there,
>>>
>>> Thank you for your directions! Here are the results I get after issuing make
>>> -k check in the test/parallel directory, and I think they look good.
>>>
>>> dyn-160-39-10-173:parallel jiewu$ make -k check
>>> /Applications/Xcode.app/Contents/Developer/usr/bin/make pcomm_unit scdtest
>>> pcomm_serial par_spatial_locator_test parallel_unit_tests scdpart
>>> read_nc_par ucdtrvpart mpastrvpart gcrm_par parallel_hdf5_test
>>> mhdf_parallel parallel_write_test uber_parallel_test parallel_adj
>>> parmerge_test augment_with_ghosts imeshp_test mbparallelcomm_test partcheck
>>> structured3 parmerge
>>> CXX pcomm_unit.o
>>> CXXLD pcomm_unit
>>> CXX scdtest.o
>>> CXXLD scdtest
>>> CXX pcomm_serial.o
>>> CXXLD pcomm_serial
>>> CXX par_spatial_locator_test.o
>>> CXXLD par_spatial_locator_test
>>> CXX parallel_unit_tests.o
>>> CXXLD parallel_unit_tests
>>> CXX scdpart.o
>>> CXXLD scdpart
>>> CXX ../io/read_nc.o
>>> CXXLD read_nc_par
>>> CXX ucdtrvpart.o
>>> CXXLD ucdtrvpart
>>> CXX mpastrvpart.o
>>> CXXLD mpastrvpart
>>> CXX gcrm_par.o
>>> CXXLD gcrm_par
>>> CXX parallel_hdf5_test.o
>>> CXXLD parallel_hdf5_test
>>> CC mhdf_parallel.o
>>> CCLD mhdf_parallel
>>> CXX parallel_write_test.o
>>> CXXLD parallel_write_test
>>> CXX uber_parallel_test.o
>>> CXXLD uber_parallel_test
>>> CXX ../adj_moab_test.o
>>> CXXLD parallel_adj
>>> CXX parmerge_test.o
>>> CXXLD parmerge_test
>>> CXX augment_with_ghosts.o
>>> CXXLD augment_with_ghosts
>>> PPFC imeshp_test.o
>>> FCLD imeshp_test
>>> CXX mbparallelcomm_test.o
>>> CXXLD mbparallelcomm_test
>>> CXX partcheck.o
>>> CXXLD partcheck
>>> CXX structured3.o
>>> CXXLD structured3
>>> CXX parmerge.o
>>> CXXLD parmerge
>>> /Applications/Xcode.app/Contents/Developer/usr/bin/make check-TESTS
>>> PASS: pcomm_unit
>>> PASS: scdtest
>>> PASS: pcomm_serial
>>> PASS: par_spatial_locator_test
>>> PASS: parallel_unit_tests
>>> PASS: scdpart
>>> PASS: read_nc_par
>>> PASS: ucdtrvpart
>>> PASS: mpastrvpart
>>> PASS: gcrm_par
>>> PASS: parallel_hdf5_test
>>> PASS: mhdf_parallel
>>> PASS: parallel_write_test
>>> PASS: uber_parallel_test
>>> PASS: parallel_adj
>>> PASS: parmerge_test
>>> PASS: augment_with_ghosts
>>> PASS: imeshp_test
>>> ============================================================================
>>> Testsuite summary for MOAB 4.9.2pre
>>> ============================================================================
>>> # TOTAL: 18
>>> # PASS: 18
>>> # SKIP: 0
>>> # XFAIL: 0
>>> # FAIL: 0
>>> # XPASS: 0
>>> # ERROR: 0
>>> ============================================================================
>>>
>>> Following is an example I ran with mpiexec, and I think it also went well.
>>>
>>> Thank you very much for your help! Any help dealing with the failing tests
>>> will be much appreciated.
>>>
>>> Best,
>>> Jie
>>>
>>> dyn-160-39-10-173:mbcoupler jiewu$ mpiexec -np 2 mbcoupler_test -meshes
>>> /Users/wujie/SourceCode/moab/MeshFiles/unittest/64bricks_1khex.h5m
>>> /Users/wujie/SourceCode/moab/MeshFiles/unittest/64bricks_12ktet.h5m -itag
>>> vertex_field -outfile dum.h5m
>>> ReadHDF5: 0.0324039
>>> get set meta 0.00103188
>>> partial subsets 0.000803947
>>> partition time 2.14577e-06
>>> get set ids 0.000556946
>>> set contents 0.00107098
>>> polyhedra 2.5034e-05
>>> elements 0.00174189
>>> nodes 0.0014832
>>> node adjacency 0
>>> side elements 0.000672817
>>> update connectivity 0.000102043
>>> adjacency 0
>>> delete non_adj 0.00338697
>>> recursive sets 0
>>> find contain_sets 0.000369072
>>> read sets 0.00147104
>>> read tags 0.01667
>>> store file ids 0.00191593
>>> read qa records 0.000133038
>>> Parallel Read times:
>>> 0.0473361 PARALLEL READ PART
>>> 0.0138111 PARALLEL RESOLVE_SHARED_ENTS
>>> 0.0113361 PARALLEL EXCHANGE_GHOSTS
>>> 0.00113177 PARALLEL RESOLVE_SHARED_SETS
>>> 0.00648403 PARALLEL_AUGMENT_SETS_WITH_GHOSTS
>>> 0.079994 PARALLEL TOTAL
>>> ReadHDF5: 0.0714378
>>> get set meta 0.000261784
>>> partial subsets 0.000660181
>>> partition time 9.53674e-07
>>> get set ids 0.00101304
>>> set contents 0.000728846
>>> polyhedra 0.000128031
>>> elements 0.0155201
>>> nodes 0.00354886
>>> node adjacency 6.19888e-06
>>> side elements 0.00422001
>>> update connectivity 0.00184798
>>> adjacency 0
>>> delete non_adj 0.0100739
>>> recursive sets 0
>>> find contain_sets 0.00218511
>>> read sets 0.00258899
>>> read tags 0.0235598
>>> store file ids 0.0044961
>>> read qa records 9.5129e-05
>>> Parallel Read times:
>>> 0.0779119 PARALLEL READ PART
>>> 0.0667319 PARALLEL RESOLVE_SHARED_ENTS
>>> 0.0397439 PARALLEL EXCHANGE_GHOSTS
>>> 0.00435901 PARALLEL RESOLVE_SHARED_SETS
>>> 0.0243261 PARALLEL_AUGMENT_SETS_WITH_GHOSTS
>>> 0.209468 PARALLEL TOTAL
>>> Proc 1 iface entities:
>>> 446 0d iface entities.
>>> 1080 1d iface entities.
>>> Proc 0636 2d iface entities.
>>> 0 3d iface entities.
>>> iface entities:
>>> 446 0d iface entities.
>>> 108 (446 verts adj to other iface ents)
>>> 0 1d iface entities.
>>> 636 2d iface entities.
>>> 0 3d iface entities.
>>> (446 verts adj to other iface ents)
>>> rank 0 points of interest: 6155
>>> rank 1 points of interest: 6155
>>>
>>> Max time : 0.101124 0.0461781 0.00104499 (inst loc interp -- 2 procs)
>>> WriteHDF5: 0.0779669
>>> gather mesh: 0.00237203
>>> create file: 0.0655501
>>> create nodes: 0.00201201
>>> negotiate types: 2.19345e-05
>>> create elem: 0.000785828
>>> file id exch: 0.0234561
>>> create adj: 0.000414848
>>> create set: 0.019654
>>> shared ids: 6.19888e-05
>>> shared data: 0.014715
>>> set offsets: 0.00447798
>>> create tags: 0.0160849
>>> coordinates: 0.00199103
>>> connectivity: 0.00162005
>>> sets: 0.0010829
>>> set descrip: 0.00019598
>>> set content: 0.000527859
>>> set parent: 0.000226021
>>> set child: 0.000128984
>>> adjacencies: 5.96046e-06
>>> tags: 0.00700402
>>> dense data: 0.00127792
>>> sparse data: 0.00569224
>>> var-len data: 0
>>> Wrote dum.h5m
>>> mbcoupler_test complete.
>>>
>>>
>>>
>>> On May 13, 2016, at 2:58 PM, Grindeanu, Iulian R. <iulian at mcs.anl.gov>
>>> wrote:
>>>
>>> Yes, can you try make -k check?
>>> It will not stop at the first failed test; it will try to test everything.
>>> ________________________________________
>>> From: Vijay S. Mahadevan [vijay.m at gmail.com]
>>> Sent: Friday, May 13, 2016 1:53 PM
>>> To: Jie Wu
>>> Cc: Grindeanu, Iulian R.; moab-dev at mcs.anl.gov
>>> Subject: Re: [MOAB-dev] Error message when compiling the mbcoupler_test.cpp
>>>
>>> Jie,
>>>
>>> We are looking into the failing tests and will update you if there is
>>> a bug in the code. Our nightly tests are running cleanly, so I'm
>>> curious to see if other tests are failing. Can you go to test/parallel
>>> and run "make check" there to verify if the parallel job launch is
>>> working correctly? Also, can you verify if you can run MPI programs
>>> with mpiexec?
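>>> (As a quick sanity check, something as simple as
>>>   mpiexec -np 2 hostname
>>> should print two host names if the launcher is working.)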
>>>
>>> Vijay
>>>
>>>
>>> On Fri, May 13, 2016 at 1:31 PM, Jie Wu <jie.voo at gmail.com> wrote:
>>>
>>> Hi Vijay,
>>>
>>> Deleting the auto-installed packages under sandbox before issuing the new
>>> configuration worked! No error message emerged until I ran the command
>>> make check, which produced the following.
>>>
>>> Do you have any idea for this problem?
>>>
>>> Best,
>>> Jie
>>>
>>> PASS: reorder_test
>>> FAIL: test_prog_opt
>>> PASS: coords_connect_iterate
>>> PASS: elem_eval_test
>>> PASS: spatial_locator_test
>>> PASS: test_boundbox
>>> PASS: adj_moab_test
>>> PASS: uref_mesh_test
>>> PASS: verdict_test
>>> PASS: test_Matrix3
>>> PASS: mbfacet_test
>>> PASS: gttool_test
>>> PASS: cropvol_test
>>> FAIL: mergemesh_test
>>> PASS: mbground_test
>>> PASS: lloyd_smoother_test
>>> ============================================================================
>>> Testsuite summary for MOAB 4.9.2pre
>>> ============================================================================
>>> # TOTAL: 36
>>> # PASS: 34
>>> # SKIP: 0
>>> # XFAIL: 0
>>> # FAIL: 2
>>> # XPASS: 0
>>> # ERROR: 0
>>> ============================================================================
>>> See test/test-suite.log
>>> Please report to moab-dev at mcs.anl.gov
>>> ============================================================================
>>> make[4]: *** [test-suite.log] Error 1
>>> make[3]: *** [check-TESTS] Error 2
>>> make[2]: *** [check-am] Error 2
>>> make[1]: *** [check-recursive] Error 1
>>> make: *** [check-recursive] Error 1
>>>
>>>
>>>
>>>
>>> On May 13, 2016, at 1:08 PM, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
>>>
>>> Jie, since you now changed the compilers used to configure the stack,
>>> I would advise deleting the auto-installed packages under the sandbox
>>> directory and then re-configuring. This will force the rebuild of the
>>> HDF5/NetCDF packages with the parallel MPI compilers, and you should
>>> get HDF5-parallel support enabled after that. Makes sense?
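>>> Roughly, assuming the sandbox directory sits at the top of your MOAB
>>> source tree, something like:
>>>
>>>   rm -rf sandbox
>>>   ./configure <YOUR_CONFIGURE_OPTIONS_WITH_MPI_COMPILERS>
>>>   make -j4 && make check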
>>>
>>> Vijay
>>>
>>> On Fri, May 13, 2016 at 11:27 AM, Jie Wu <jie.voo at gmail.com> wrote:
>>>
>>> Hi Vijay,
>>>
>>> Thank you for your reply. The new configuration does work. Here are the
>>> command I entered and the message I get when I run make check.
>>>
>>> dyn-160-39-10-173:moab jiewu$ ./configure --with-mpi=/usr/local
>>> CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpic++ FC=/usr/local/bin/mpif90
>>> F77=/usr/local/bin/mpif77 --download-metis --download-hdf5 --download-netcdf
>>> --enable-docs --with-doxygen=/Applications/Doxygen.app/Contents/Resources/
>>>
>>> The results are
>>>
>>> configure: WARNING:
>>> *************************************************************************
>>> * MOAB has been configured with parallel and HDF5 support
>>> * but the configured HDF5 library does not support parallel IO.
>>> * Some parallel IO capabilities will be disabled.
>>> *************************************************************************
>>>
>>> After that, I ran the command make -j4, and then make check.
>>> Here are the results it shows:
>>>
>>> /Applications/Xcode.app/Contents/Developer/usr/bin/make check-TESTS
>>> PASS: elem_util_test
>>> PASS: element_test
>>> FAIL: datacoupler_test
>>> FAIL: mbcoupler_test
>>> FAIL: parcoupler_test
>>> ============================================================================
>>> Testsuite summary for MOAB 4.9.2pre
>>> ============================================================================
>>> # TOTAL: 5
>>> # PASS: 2
>>> # SKIP: 0
>>> # XFAIL: 0
>>> # FAIL: 3
>>> # XPASS: 0
>>> # ERROR: 0
>>> ============================================================================
>>> See tools/mbcoupler/test-suite.log
>>> Please report to moab-dev at mcs.anl.gov
>>> ============================================================================
>>> make[4]: *** [test-suite.log] Error 1
>>> make[3]: *** [check-TESTS] Error 2
>>> make[2]: *** [check-am] Error 2
>>> make[1]: *** [check-recursive] Error 1
>>> make: *** [check-recursive] Error 1
>>>
>>> Then I tried to run mbcoupler_test with the command
>>> dyn-160-39-10-173:mbcoupler jiewu$ mpiexec -np 2 mbcoupler_test -meshes
>>> /Users/wujie/SourceCode/moab/MeshFiles/unittest/64bricks_1khex.h5m
>>> /Users/wujie/SourceCode/moab/MeshFiles/unittest/64bricks_12ktet.h5m -itag
>>> vertex_field -outfile dum.h5m
>>> Now the results look like this:
>>>
>>> [0]MOAB ERROR: --------------------- Error Message
>>> ------------------------------------
>>> [0]MOAB ERROR: MOAB not configured with parallel HDF5 support!
>>> [0]MOAB ERROR: set_up_read() line 355 in src/io/ReadHDF5.cpp
>>> [0]MOAB ERROR: --------------------- Error Message
>>> ------------------------------------
>>> [0]MOAB ERROR: NULL file handle.!
>>> [0]MOAB ERROR: is_error() line 133 in src/io/ReadHDF5.hpp
>>> [1]MOAB ERROR: --------------------- Error Message
>>> ------------------------------------
>>> [1]MOAB ERROR: MOAB not configured with parallel HDF5 support!
>>> [1]MOAB ERROR: set_up_read() line 355 in src/io/ReadHDF5.cpp
>>> [1]MOAB ERROR: --------------------- Error Message
>>> ------------------------------------
>>> [1]MOAB ERROR: NULL file handle.!
>>> [1]MOAB ERROR: is_error() line 133 in src/io/ReadHDF5.hpp
>>> [0]MOAB ERROR: --------------------- Error Message
>>> ------------------------------------
>>> [0]MOAB ERROR: Failed to load file after trying all possible readers!
>>> [0]MOAB ERROR: serial_load_file() line 635 in src/Core.cpp
>>> [0]MOAB ERROR: --------------------- Error Message
>>> ------------------------------------
>>> [0]MOAB ERROR: Failed in step PARALLEL READ PART!
>>> [0]MOAB ERROR: load_file() line 553 in src/parallel/ReadParallel.cpp
>>> [0]MOAB ERROR: load_file() line 252 in src/parallel/ReadParallel.cpp
>>> [0]MOAB ERROR: load_file() line 515 in src/Core.cpp
>>> [1]MOAB ERROR: --------------------- Error Message
>>> ------------------------------------
>>> [1]MOAB ERROR: Failed to load file after trying all possible readers!
>>> [1]MOAB ERROR: serial_load_file() line 635 in src/Core.cpp
>>> [1]MOAB ERROR: --------------------- Error Message
>>> ------------------------------------
>>> [1]MOAB ERROR: Failed in step PARALLEL READ PART!
>>> [1]MOAB ERROR: load_file() line 553 in src/parallel/ReadParallel.cpp
>>> [0]MOAB ERROR: main() line 172 in mbcoupler_test.cpp
>>> [1]MOAB ERROR: load_file() line 252 in src/parallel/ReadParallel.cpp
>>> [1]MOAB ERROR: load_file() line 515 in src/Core.cpp
>>> [1]MOAB ERROR: main() line 172 in mbcoupler_test.cpp
>>> --------------------------------------------------------------------------
>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>>> with errorcode 16.
>>>
>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>> You may or may not see output from other processes, depending on
>>> exactly when Open MPI kills them.
>>> --------------------------------------------------------------------------
>>> [dyn-160-39-11-172.dyn.columbia.edu:26361] 1 more process has sent help
>>> message help-mpi-api.txt / mpi-abort
>>> [dyn-160-39-11-172.dyn.columbia.edu:26361] Set MCA parameter
>>> "orte_base_help_aggregate" to 0 to see all help / error messages
>>> The other executable files in the mbcoupler folder, like ./element_test, now
>>> run successfully.
>>>
>>> Any direction on this would be deeply appreciated!
>>>
>>> Best,
>>> Jie
>>>
>>>
>>> On May 13, 2016, at 11:54 AM, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
>>>
>>> I also forgot to mention that you can be more explicit in the
>>> specification of compilers during configuration, if you really want.
>>> This would avoid headaches, since the user-specified wrappers will
>>> directly be used. For example:
>>>
>>> ./configure --with-mpi=/usr/local CC=/usr/local/bin/mpicc
>>> CXX=/usr/local/bin/mpic++ FC=/usr/local/bin/mpif90
>>> F77=/usr/local/bin/mpif77 <OTHER_CONFIGURE_OPTIONS>
>>>
>>> Vijay
>>>
>>> On Fri, May 13, 2016 at 10:52 AM, Vijay S. Mahadevan <vijay.m at gmail.com>
>>> wrote:
>>>
>>> Please use --with-mpi=/usr/local as the configure option. If you have
>>> MPI pre-installed, either through PETSc or natively in your system,
>>> always try to re-use that to maintain consistency. This will help
>>> avoid mixing up MPI implementations when you try to launch a parallel
>>> job later.
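>>> For instance, if you want to re-use the MPI that came with a PETSc build
>>> instead, the option would point at that installation (the path here is
>>> only an example):
>>>
>>>   ./configure --with-mpi=$PETSC_DIR/$PETSC_ARCH <OTHER_CONFIGURE_OPTIONS>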
>>>
>>> Let us know if the new configuration works.
>>>
>>> Vijay
>>>
>>> On Thu, May 12, 2016 at 6:32 PM, Jie Wu <jie.voo at gmail.com> wrote:
>>>
>>> Thank you very much for your reply. I searched my src/moab/MOABConfig.h
>>> file and did not find MOAB_HAVE_MPI, so I think the MOAB on my laptop
>>> cannot run in parallel yet.
>>>
>>> I have this from my terminal.
>>>
>>> dyn-160-39-10-173:moab jiewu$ which mpic++
>>> /usr/local/bin/mpic++
>>>
>>> But I do not get anything from which mpi, so I don't know where MPI is
>>> on my laptop.
>>>
>>> dyn-160-39-10-173:moab jiewu$ which mpi
>>>
>>> I have PETSc installed, which includes MPICH, and pointing the MOAB
>>> configure option at that directory also did not work.
>>>
>>> Should I install MPI separately? I am afraid it might conflict with the
>>> one installed with PETSc.
>>>
>>> Is there any way MPI could be installed with MOAB automatically, or some
>>> other proper way to set it up? Thanks a lot!
>>>
>>> Best,
>>> Jie
>>>
>>>
>>> On May 12, 2016, at 6:11 PM, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
>>>
>>> Can you send the config.log? It looks like MPI is (possibly) not getting
>>> enabled, even with --download-mpich. Or you can check
>>> src/moab/MOABConfig.h in your build directory and grep for
>>> MOAB_HAVE_MPI.
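>>> i.e., from the build directory, something like
>>>   grep MOAB_HAVE_MPI src/moab/MOABConfig.h
>>> should show a "#define MOAB_HAVE_MPI" line if MPI support was enabled.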
>>>
>>> Vijay
>>>
>>> On Thu, May 12, 2016 at 4:46 PM, Jie Wu <jie.voo at gmail.com> wrote:
>>>
>>> Hi Iulian,
>>>
>>> Thanks for your reply. I think my configure does include MPI. Here is my
>>> configure command:
>>>
>>> dyn-160-39-10-173:moab jiewu$ ./configure --download-metis
>>> --download-hdf5 --download-netcdf --download-mpich --enable-docs
>>> --with-doxygen=/Applications/Doxygen.app/Contents/Resources/
>>>
>>> Best,
>>> Jie
>>>
>>> On May 12, 2016, at 5:43 PM, Grindeanu, Iulian R. <iulian at mcs.anl.gov>
>>> wrote:
>>>
>>> Hi Jie,
>>> Did you configure with mpi? What is your configure command?
>>>
>>> Iulian
>>> ________________________________
>>> From: moab-dev-bounces at mcs.anl.gov [moab-dev-bounces at mcs.anl.gov] on behalf
>>> of Jie Wu [jie.voo at gmail.com]
>>> Sent: Thursday, May 12, 2016 4:23 PM
>>> To: moab-dev at mcs.anl.gov
>>> Subject: [MOAB-dev] Error message when compiling the mbcoupler_test.cpp
>>>
>>> Hi all,
>>>
>>> My name is Jie and I am part of the computational mechanics group in the
>>> civil engineering department at Columbia University.
>>>
>>> I am working on large deformation problems, in which the mesh may become
>>> distorted and re-meshing becomes necessary.
>>>
>>> I would like to compile mbcoupler_test.cpp to learn how it transfers
>>> variables from the old mesh to the new one.
>>>
>>> Now I can successfully compile the code in build/examples and it works
>>> well! But I cannot compile the code in the build/tools/mbcoupler folder by
>>> following the instructions in build/tools/readme.tools; it fails with error
>>> messages like the following.
>>>
>>> Do you have any idea for this problem? Thanks a lot!
>>>
>>> Best,
>>> Jie
>>>
>>> DataCoupler.cpp:136:25: error: member access into incomplete
>>> type 'moab::ParallelComm'
>>> if (myPcomm && myPcomm->size() > 1) {
>>> ^
>>> ./DataCoupler.hpp:34:7: note: forward declaration of
>>> 'moab::ParallelComm'
>>> class ParallelComm;
>>> ^
>>> DataCoupler.cpp:161:12: error: member access into incomplete
>>> type 'moab::ParallelComm'
>>> myPcomm->proc_config().crystal_router()->gs_tran...
>>> ^
>>> ./DataCoupler.hpp:34:7: note: forward declaration of
>>> 'moab::ParallelComm'
>>> class ParallelComm;
>>> ^
>>> DataCoupler.cpp:187:12: error: member access into incomplete
>>> type 'moab::ParallelComm'
>>> myPcomm->proc_config().crystal_router()->gs_tran...
>>> ^
>>> ./DataCoupler.hpp:34:7: note: forward declaration of
>>> 'moab::ParallelComm'
>>> class ParallelComm;
>>> ^
>>> 3 errors generated.
>>> make[2]: *** [DataCoupler.lo] Error 1
>>> make[2]: *** Waiting for unfinished jobs....
>>> Coupler.cpp:344:45: error: no member named 'gs_transfer' in
>>> 'moab::gs_data::crystal_data'
>>> (myPc->proc_config().crystal_router())->gs_trans...
>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
>>> Coupler.cpp:388:45: error: no member named 'gs_transfer' in
>>> 'moab::gs_data::crystal_data'
>>> (myPc->proc_config().crystal_router())->gs_trans...
>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
>>> Coupler.cpp:611:45: error: no member named 'gs_transfer' in
>>> 'moab::gs_data::crystal_data'
>>> (myPc->proc_config().crystal_router())->gs_trans...
>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
>>> Coupler.cpp:638:43: error: no member named 'gs_transfer' in
>>> 'moab::gs_data::crystal_data'
>>> myPc->proc_config().crystal_router()->gs_transfe...
>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
>>> 4 errors generated.
>>> make[2]: *** [Coupler.lo] Error 1
>>> make[1]: *** [all-recursive] Error 1
>>> make: *** [all-recursive] Error 1
>>>
>