[petsc-users] issues with mpi uni

Satish Balay balay at mcs.anl.gov
Tue Aug 24 10:18:21 CDT 2021


MPI_UNI is name-spaced to avoid such conflicts. I don't know about MUMPS.
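
Roughly, the name-spacing works by macro-renaming every public MPI_* symbol
in the serial stub to a private one, so it cannot clash at link time with a
real libmpi. A minimal sketch of the idea (illustrative names only, not the
actual mpiuni header):

  /* serial "uni" MPI stub: every public name is renamed to a private one */
  typedef int Uni_MPI_Comm;
  #define MPI_Comm       Uni_MPI_Comm
  #define MPI_COMM_WORLD 1
  #define MPI_Init       Uni_MPI_Init       /* renamed so they cannot collide  */
  #define MPI_Comm_size  Uni_MPI_Comm_size  /* with symbols from a real libmpi */
  #define MPI_Finalize   Uni_MPI_Finalize
  static inline int Uni_MPI_Init(int *argc, char ***argv) { (void)argc; (void)argv; return 0; }
  static inline int Uni_MPI_Comm_size(Uni_MPI_Comm c, int *size) { (void)c; *size = 1; return 0; }
  static inline int Uni_MPI_Finalize(void) { return 0; }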

But there could be corner cases where this issue comes up.
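
One quick way to check whether you have hit such a case is to look at what the
libraries in the final link actually pull in and export, for example (library
names and paths are placeholders):

  ldd your_executable | grep -i mpi        # is a real libmpi linked in?
  nm -D libpetsc.so   | grep -w MPI_Init   # any un-renamed MPI symbols left?

If a real libmpi and un-renamed serial stubs both end up in the same
executable, the wrong MPI_Init can get resolved, which would match the
MPI_Comm_size-before-MPI_INIT abort quoted below.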

And it's best to have the same MPI across all packages that go into a binary anyway.
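
For example, reconfiguring PETSc against the same Open MPI installation that
sparselizard uses, roughly along these lines (the path is only a placeholder),
avoids the mismatch; drop --with-mpi=0 and --with-mumps-serial=1 in that case:

  ./configure --with-mpi-dir=/path/to/openmpi --with-shared-libraries=1 \
    --download-mumps --download-scalapack --download-metis --download-openblas \
    --download-slepc --with-debugging=0 --with-scalar-type=real --with-x=0

Equivalently, you can point configure at the MPI compiler wrappers with
--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90.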

Satish

On Tue, 24 Aug 2021, Matthew Knepley wrote:

> On Tue, Aug 24, 2021 at 5:47 AM Janne Ruuskanen (TAU) <
> janne.ruuskanen at tuni.fi> wrote:
> 
> > PETSc was built without MPI with the command:
> >
> >
> > ./configure --with-openmp --with-mpi=0 --with-shared-libraries=1
> > --with-mumps-serial=1 --download-mumps --download-openblas --download-metis
> > --download-slepc --with-debugging=0 --with-scalar-type=real --with-x=0
> > COPTFLAGS='-O3' CXXOPTFLAGS='-O3' FOPTFLAGS='-O3';
> >
> > so the MPI_UNI MPI wrapper of PETSc collides in names with the actual MPI
> > used to compile sparselizard.
> >
> 
> Different MPI implementations are not ABI compatible and therefore cannot
> be used in the same program. You must build all libraries in an executable
> with the same MPI. Thus, rebuild PETSc with the same MPI as sparselizard.
> 
>   Thanks,
> 
>      Matt
> 
> 
> > -Janne
> >
> >
> > -----Original Message-----
> > From: Satish Balay <balay at mcs.anl.gov>
> > Sent: Monday, August 23, 2021 4:45 PM
> > To: Janne Ruuskanen (TAU) <janne.ruuskanen at tuni.fi>
> > Cc: petsc-users at mcs.anl.gov
> > Subject: Re: [petsc-users] issues with mpi uni
> >
> > Did you build PETSc with the same openmpi [as what sparselizard is built
> > with]?
> >
> > Satish
> >
> > On Mon, 23 Aug 2021, Janne Ruuskanen (TAU) wrote:
> >
> > > Hi,
> > >
> > > It seems I have an issue using PETSc and Open MPI together in my C++
> > > code.
> > >
> > > See the code here:
> > > https://github.com/halbux/sparselizard/blob/master/src/slmpi.cpp
> > >
> > >
> > > So when I run:
> > >
> > > slmpi::initialize();
> > > slmpi::count();
> > > slmpi::finalize();
> > >
> > > I get the following error:
> > >
> > >
> > > *** The MPI_Comm_size() function was called before MPI_INIT was invoked.
> > > *** This is disallowed by the MPI standard.
> > > *** Your MPI job will now abort.
> > >
> > >
> > > Have you experienced anything similar with people trying to link Open MPI
> > > and PETSc into the same executable?
> > >
> > > Best regards,
> > > Janne Ruuskanen
> > >
> >
> >
> 
> 


