[petsc-users] How to use Intel OneApi mpi wrappers on Linux

Satish Balay balay at mcs.anl.gov
Mon Oct 3 10:25:56 CDT 2022


This is strange. It works for me [with this simple test]

[balay at pj01 ~]$ mpiicc -show
icc -I"/opt/intel/oneapi/mpi/2021.7.0/include" -L"/opt/intel/oneapi/mpi/2021.7.0/lib/release" -L"/opt/intel/oneapi/mpi/2021.7.0/lib" -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker "/opt/intel/oneapi/mpi/2021.7.0/lib/release" -Xlinker -rpath -Xlinker "/opt/intel/oneapi/mpi/2021.7.0/lib" -lmpifort -lmpi -ldl -lrt -lpthread
[balay at pj01 ~]$ export I_MPI_CC=icx
[balay at pj01 ~]$ mpiicc -show
icx -I"/opt/intel/oneapi/mpi/2021.7.0/include" -L"/opt/intel/oneapi/mpi/2021.7.0/lib/release" -L"/opt/intel/oneapi/mpi/2021.7.0/lib" -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker "/opt/intel/oneapi/mpi/2021.7.0/lib/release" -Xlinker -rpath -Xlinker "/opt/intel/oneapi/mpi/2021.7.0/lib" -lmpifort -lmpi -ldl -lrt -lpthread
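
The same override mechanism should also cover the C++ and Fortran wrappers; as a quick sketch (assuming the oneAPI environment has already been sourced, e.g. via setvars.sh):

export I_MPI_CXX=icpx
export I_MPI_FC=ifx
mpiicpc -show
mpiifort -show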

Satish

On Mon, 3 Oct 2022, Barry Smith wrote:

> 
>    That is indeed disappointing. mpicc and mpiicc are simple scripts that select the compiler based on multiple criteria, including the environment variables, so it is curious that this functionality does not work.
> 
>   Barry
> 
> 
> > On Oct 3, 2022, at 9:58 AM, Paolo Lampitella <paololampitella at hotmail.com> wrote:
> > 
> > Hi Barry,
> >  
> > thanks for the suggestion. I tried this, but it doesn't seem to work as expected. That is, configure actually works, but only because it is not seeing the LLVM-based compilers, just the classical Intel ones. Yet the variables seem to be exported correctly.
> >  
> > Paolo
> >  
> >  
> > From: Barry Smith <mailto:bsmith at petsc.dev>
> > Sent: Monday, October 3, 2022 15:19
> > To: Paolo Lampitella <mailto:paololampitella at hotmail.com>
> > Cc: petsc-users at mcs.anl.gov <mailto:petsc-users at mcs.anl.gov>
> > Subject: Re: [petsc-users] How to use Intel OneApi mpi wrappers on Linux
> >  
> >  
> > bsmith at petsc-01:~$ mpicc
> > This script invokes an appropriate specialized C MPI compiler driver.
> > The following ways (priority order) can be used for changing default
> > compiler name (gcc):
> >    1. Command line option:  -cc=<compiler_name>
> >    2. Environment variable: I_MPI_CC (current value '')
> >    3. Environment variable: MPICH_CC (current value '')
> > 
> > 
> > So 
> > export I_MPI_CC=icx 
> > export I_MPI_CXX=icpx
> > export I_MPI_FC=ifx 
> >  
> > should do the trick.
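> >  
> > With those exported, a minimal sketch of the corresponding PETSc configure invocation (adjust any other options to your installation) would be:
> >  
> > ./configure --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort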
> >  
> > 
> > 
> > On Oct 3, 2022, at 5:43 AM, Paolo Lampitella <paololampitella at hotmail.com <mailto:paololampitella at hotmail.com>> wrote:
> >  
> > Dear PETSc users and developers,
> >  
> > as per the title, I recently installed the base and HPC Intel OneApi toolkits on a machine running Ubuntu 20.04.
> >  
> > As you probably know, OneApi comes with the classical compilers (icc, icpc, ifort) and their corresponding MPI wrappers (mpiicc, mpiicpc, mpiifort), as well as with the new LLVM-based compilers (icx, icpx, ifx).
> >  
> > My experience so far with PETSc on Linux has been trouble-free, using either the GCC compilers with Mpich or OpenMPI, or the classical Intel compilers with Intel MPI.
> >  
> > However, I now have trouble using the new LLVM compilers with MPI because, in fact, there are no dedicated MPI wrappers for them. Instead, they have to be invoked through the classical wrappers with certain flags:
> >  
> > mpiicc -cc=icx
> > mpiicpc -cxx=icpx
> > mpiifort -fc=ifx
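> >  
> > For instance, a plain command-line test along these lines does compile (hello.c being just a placeholder name for a minimal MPI program):
> >  
> > mpiicc -cc=icx hello.c -o hello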
> >  
> > The problem is that I have no idea how to pass them correctly to configure and to whatever comes after that.
> >  
> > Admittedly, I am just starting to use the new compilers, so I have no clue how I would use them in other projects either.
> >  
> > I started with an alias in my .bash_aliases (which works for simple compilation tests from the command line), but it doesn't work with configure.
> >  
> > I also tried adding the flags to COPTFLAGS, CXXOPTFLAGS and FOPTFLAGS, but that didn't work either.
> >  
> > Do you have any experience with the new Intel compilers and, if so, could you share how to properly use them with MPI?
> >  
> > Thanks
> >  
> > Paolo
> 
> 

