[petsc-users] issues with mpi uni

Janne Ruuskanen (TAU) janne.ruuskanen at tuni.fi
Tue Aug 24 04:47:23 CDT 2021


PETSc was built without MPI with the command:


./configure --with-openmp --with-mpi=0 --with-shared-libraries=1 --with-mumps-serial=1 --download-mumps --download-openblas --download-metis --download-slepc --with-debugging=0 --with-scalar-type=real --with-x=0 COPTFLAGS='-O3' CXXOPTFLAGS='-O3' FOPTFLAGS='-O3';

so the MPI_UNI MPI wrapper of PETSc collides in names with the actual MPI used to compile sparselizard.

-Janne


-----Original Message-----
From: Satish Balay <balay at mcs.anl.gov> 
Sent: Monday, August 23, 2021 4:45 PM
To: Janne Ruuskanen (TAU) <janne.ruuskanen at tuni.fi>
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] issues with mpi uni

Did you build PETSc with the same openmpi [as what sparselizard is built with]?

Satish
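[Editor's note: building PETSc against the same OpenMPI that sparselizard uses might look like the following; the --with-mpi-dir path is a placeholder, not taken from the thread, and --with-mumps-serial=1 is dropped because it applies only to MPI-free builds.]

```shell
# Hypothetical reconfigure: point PETSc at the OpenMPI installation
# used to build sparselizard, instead of disabling MPI entirely.
./configure --with-mpi-dir=/path/to/openmpi \
    --with-openmp --with-shared-libraries=1 \
    --download-mumps --download-openblas --download-metis --download-slepc \
    --with-debugging=0 --with-scalar-type=real --with-x=0 \
    COPTFLAGS='-O3' CXXOPTFLAGS='-O3' FOPTFLAGS='-O3'
```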

On Mon, 23 Aug 2021, Janne Ruuskanen (TAU) wrote:

> Hi,
> 
> Presumably, I have an issue using PETSc and OpenMPI together in my C++ code.
> 
> See the code there:
> https://github.com/halbux/sparselizard/blob/master/src/slmpi.cpp
> 
> 
> So when I run:
> 
> slmpi::initialize();
> slmpi::count();
> slmpi::finalize();
> 
> I get the following error:
> 
> 
> *** The MPI_Comm_size() function was called before MPI_INIT was invoked.
> *** This is disallowed by the MPI standard.
> *** Your MPI job will now abort.
> 
> 
> Have you experienced anything similar with people trying to link OpenMPI and PETSc into the same executable?
> 
> Best regards,
> Janne Ruuskanen
> 
