[petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3!

Niklas Fischer niklas at niklasfi.de
Tue Apr 22 06:48:12 CDT 2014


On 22.04.2014 13:08, Jed Brown wrote:
> Niklas Fischer <niklas at niklasfi.de> writes:
>
>> Hello,
>>
>> I have attached a small test case for a problem I am experiencing. This
>> dummy program reads a vector and a matrix from a text file and then
>> solves Ax=b. The same data is available in two forms:
>>   - everything is in one file (matops.s.0 and vops.s.0)
>>   - the matrix and vector are split between processes (matops.0,
>> matops.1, vops.0, vops.1)
>>
>> The serial version of the program works perfectly fine, but unfortunately
>> errors occur when running the parallel version:
>>
>> make && mpirun -n 2 a.out matops vops
>>
>> mpic++ -DPETSC_CLANGUAGE_CXX -isystem
>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include -isystem
>> /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp -Werror -Wall
>> -Wpedantic -std=c++11 -L
>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc
>> /usr/bin/ld: warning: libmpi_cxx.so.0, needed by
>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so,
>> may conflict with libmpi_cxx.so.1
>> /usr/bin/ld: warning: libmpi.so.0, needed by
>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so,
>> may conflict with libmpi.so.1
>> librdmacm: couldn't read ABI version.
>> librdmacm: assuming: 4
>> CMA: unable to get RDMA device list
>> --------------------------------------------------------------------------
>> [[43019,1],0]: A high-performance Open MPI point-to-point messaging module
>> was unable to find any relevant network interfaces:
>>
>> Module: OpenFabrics (openib)
>>    Host: dornroeschen.igpm.rwth-aachen.de
>> CMA: unable to get RDMA device list
> It looks like your MPI is either broken or some of the code linked into
> your application was compiled with a different MPI or different version.
> Make sure you can compile and run simple MPI programs in parallel.
Hello Jed,

thank you for your input. Unfortunately, MPI does not seem to be the
issue here. The attachment contains a simple MPI hello world program
which runs flawlessly (its output is appended to this mail), and I have
not encountered any problems with other MPI programs. My question still
stands.
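
To make the question concrete: as far as I understand, the "Different
communicators in the two objects" message comes from PETSc's check that
all objects passed to a single call share one MPI communicator. The
sketch below is hypothetical (it is not the attached petsctest.cpp) and
uses the PETSc 3.4-era API; it shows the pattern I am aiming for, with
every object created on PETSC_COMM_WORLD:

  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    PetscErrorCode ierr;
    PetscInt       n = 10, rstart, rend, i;
    Mat            A;
    Vec            x, b;
    KSP            ksp;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

    /* Every object lives on PETSC_COMM_WORLD. Creating b with
       VecCreateSeq(PETSC_COMM_SELF, ...) instead would trigger the
       "Different communicators in the two objects" error as soon as
       A and b meet in KSPSolve. */
    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);
    ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
    for (i = rstart; i < rend; ++i) { /* trivial SPD diagonal, just for the demo */
      ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    ierr = VecCreate(PETSC_COMM_WORLD, &b);CHKERRQ(ierr);
    ierr = VecSetSizes(b, PETSC_DECIDE, n);CHKERRQ(ierr);
    ierr = VecSetFromOptions(b);CHKERRQ(ierr);
    ierr = VecSet(b, 1.0);CHKERRQ(ierr);
    ierr = VecDuplicate(b, &x);CHKERRQ(ierr);

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    /* 3.4-era signature; the MatStructure flag was removed in PETSc 3.5 */
    ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = VecDestroy(&b);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return 0;
  }

If reading the per-process files (matops.0, matops.1, ...) ends up
creating objects on PETSC_COMM_SELF on each rank, that mismatch would
explain the error above.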

Regards,
Niklas Fischer
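
For reference, mpitest is, as far as one can tell from its output, the
canonical MPI hello world, along these lines (a reconstruction, not the
literal attachment):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
    MPI_Init(&argc, &argv);

    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* The processor name is the host name that appears in the output. */
    char name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(name, &name_len);

    printf("Hello world from processor %s, rank %d out of %d processors\n",
           name, world_rank, world_size);

    MPI_Finalize();
    return 0;
  }

Running it with two processes gives: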

mpirun -np 2 ./mpitest

librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
--------------------------------------------------------------------------
[[44086,1],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
   Host: dornroeschen.igpm.rwth-aachen.de

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 0 out of 2 processors
Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 1 out of 2 processors
[dornroeschen.igpm.rwth-aachen.de:128141] 1 more process has sent help message help-mpi-btl-base.txt / btl:no-nics
[dornroeschen.igpm.rwth-aachen.de:128141] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

