[MOAB-dev] Error message when compiling the mbcoupler_test.cpp

Vijay S. Mahadevan vijay.m at gmail.com
Fri May 13 10:54:39 CDT 2016


I also forgot to mention that you can specify the compilers explicitly
during configuration, if you want. This avoids headaches, since the
user-specified wrappers will be used directly. For example:

./configure --with-mpi=/usr/local CC=/usr/local/bin/mpicc
CXX=/usr/local/bin/mpic++ FC=/usr/local/bin/mpif90
F77=/usr/local/bin/mpif77 <OTHER_CONFIGURE_OPTIONS>
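
To verify which MPI installation a given wrapper points to before
configuring, you can ask the wrapper itself (the -show flag below is
what MPICH-based wrappers accept; Open MPI uses --showme instead):

which mpic++
/usr/local/bin/mpic++ -show

The second command prints the underlying compiler invocation, including
the include and library paths of the MPI installation the wrapper was
built against.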

Vijay

On Fri, May 13, 2016 at 10:52 AM, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
> Please use --with-mpi=/usr/local as the configure option. If you have
> MPI pre-installed, whether through PETSc or natively on your system,
> always try to re-use it to maintain consistency. This helps avoid
> mixing up MPI implementations when you launch a parallel job later.
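>
> For example, combined with the download options from your earlier
> command, the configure line might look like this (a sketch only; note
> that --download-mpich is dropped, since the pre-installed MPI is used
> instead, and the paths should be adjusted to your setup):
>
> ./configure --with-mpi=/usr/local --download-metis --download-hdf5
> --download-netcdf --enable-docs
> --with-doxygen=/Applications/Doxygen.app/Contents/Resources/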
>
> Let us know if the new configuration works.
>
> Vijay
>
> On Thu, May 12, 2016 at 6:32 PM, Jie Wu <jie.voo at gmail.com> wrote:
>> Thank you very much for your reply. I searched my src/moab/MOABConfig.h
>> file and did not find MOAB_HAVE_MPI, so I think the MOAB build on my
>> laptop cannot run in parallel yet.
>>
>> Here is what I get from my terminal:
>>
>> dyn-160-39-10-173:moab jiewu$ which mpic++
>> /usr/local/bin/mpic++
>>
>> But I get nothing when I type which mpi, so I don't know where MPI is
>> installed on my laptop.
>>
>> dyn-160-39-10-173:moab jiewu$ which mpi
>>
>> I have PETSc installed, which includes MPICH, but pointing the MOAB
>> configure option at that directory also did not work.
>>
>> Should I install MPI separately? I am afraid it might conflict with the
>> one installed with PETSc.
>>
>> Is there a way to have MPI installed automatically along with MOAB, or
>> some other proper way to do this? Thanks a lot!
>>
>> Best,
>> Jie
>>
>>
>> On May 12, 2016, at 6:11 PM, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
>>
>> Can you send the config.log? It looks like MPI is possibly not getting
>> enabled, even with --download-mpich. Alternatively, you can check
>> src/moab/MOABConfig.h in your build directory and grep for
>> MOAB_HAVE_MPI.
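>>
>> For example, from the build directory:
>>
>> grep MOAB_HAVE_MPI src/moab/MOABConfig.h
>>
>> In a parallel-enabled build this should print a line defining the
>> macro (something like "#define MOAB_HAVE_MPI 1"); if it prints nothing,
>> or only a commented-out #undef, MPI support was not enabled.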
>>
>> Vijay
>>
>> On Thu, May 12, 2016 at 4:46 PM, Jie Wu <jie.voo at gmail.com> wrote:
>>
>> Hi Iulian,
>>
>> Thanks for your reply. I think my configuration includes MPI. Here is
>> my configure command:
>>
>> dyn-160-39-10-173:moab jiewu$    ./configure --download-metis
>> --download-hdf5 --download-netcdf --download-mpich --enable-docs
>> --with-doxygen=/Applications/Doxygen.app/Contents/Resources/
>>
>> Best,
>> Jie
>>
>> On May 12, 2016, at 5:43 PM, Grindeanu, Iulian R. <iulian at mcs.anl.gov>
>> wrote:
>>
>> Hi Jie,
>> Did you configure with MPI? What is your configure command?
>>
>> Iulian
>> ________________________________
>> From: moab-dev-bounces at mcs.anl.gov [moab-dev-bounces at mcs.anl.gov] on behalf
>> of Jie Wu [jie.voo at gmail.com]
>> Sent: Thursday, May 12, 2016 4:23 PM
>> To: moab-dev at mcs.anl.gov
>> Subject: [MOAB-dev] Error message when compiling the mbcoupler_test.cpp
>>
>> Hi all,
>>
>> My name is Jie, and I am part of the computational mechanics group in
>> the civil engineering department at Columbia University.
>>
>> I am working on large-deformation problems, in which the mesh may become
>> distorted and re-meshing becomes necessary.
>>
>> I would like to compile mbcoupler_test.cpp to learn how it transfers
>> variables from the old mesh to the new one.
>>
>> I can successfully compile the code in build/examples, and it works
>> well! But I cannot compile the code in build/tools/mbcoupler by
>> following the instructions in build/tools/readme.tools; I get error
>> messages like the following.
>>
>> Do you have any idea what the problem is? Thanks a lot!
>>
>> Best,
>> Jie
>>
>> DataCoupler.cpp:136:25: error: member access into incomplete
>>      type 'moab::ParallelComm'
>>  if (myPcomm && myPcomm->size() > 1) {
>>                        ^
>> ./DataCoupler.hpp:34:7: note: forward declaration of
>>      'moab::ParallelComm'
>> class ParallelComm;
>>      ^
>> DataCoupler.cpp:161:12: error: member access into incomplete
>>      type 'moab::ParallelComm'
>>    myPcomm->proc_config().crystal_router()->gs_tran...
>>           ^
>> ./DataCoupler.hpp:34:7: note: forward declaration of
>>      'moab::ParallelComm'
>> class ParallelComm;
>>      ^
>> DataCoupler.cpp:187:12: error: member access into incomplete
>>      type 'moab::ParallelComm'
>>    myPcomm->proc_config().crystal_router()->gs_tran...
>>           ^
>> ./DataCoupler.hpp:34:7: note: forward declaration of
>>      'moab::ParallelComm'
>> class ParallelComm;
>>      ^
>> 3 errors generated.
>> make[2]: *** [DataCoupler.lo] Error 1
>> make[2]: *** Waiting for unfinished jobs....
>> Coupler.cpp:344:45: error: no member named 'gs_transfer' in
>>      'moab::gs_data::crystal_data'
>>    (myPc->proc_config().crystal_router())->gs_trans...
>>    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  ^
>> Coupler.cpp:388:45: error: no member named 'gs_transfer' in
>>      'moab::gs_data::crystal_data'
>>    (myPc->proc_config().crystal_router())->gs_trans...
>>    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  ^
>> Coupler.cpp:611:45: error: no member named 'gs_transfer' in
>>      'moab::gs_data::crystal_data'
>>    (myPc->proc_config().crystal_router())->gs_trans...
>>    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  ^
>> Coupler.cpp:638:43: error: no member named 'gs_transfer' in
>>      'moab::gs_data::crystal_data'
>>    myPc->proc_config().crystal_router()->gs_transfe...
>>    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  ^
>> 4 errors generated.
>> make[2]: *** [Coupler.lo] Error 1
>> make[1]: *** [all-recursive] Error 1
>> make: *** [all-recursive] Error 1
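>>
>> For reference, the first set of errors follows from the note in the
>> log: ./DataCoupler.hpp only forward-declares moab::ParallelComm, so the
>> compiler does not know its members. A minimal hypothetical snippet (not
>> MOAB code) that reproduces the same diagnostic:
>>
>> // Forward declaration only: the compiler knows the class name,
>> // but the type is "incomplete" (no members are visible yet).
>> namespace moab { class ParallelComm; }
>>
>> int nprocs(moab::ParallelComm* pcomm)
>> {
>>     // error: member access into incomplete type 'moab::ParallelComm'
>>     return pcomm->size();
>> }
>>
>> Compiling the real code requires the full class definition from MOAB's
>> parallel headers, which is consistent with the suspicion above that MPI
>> support was not enabled in this build.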