[petsc-users] Using PETSC with an openMP program

Adrián Amor aamor at pa.uc3m.es
Fri Mar 2 13:04:38 CST 2018


Thanks Satish, I tried the procedure you suggested and I get the same
result, so I guess that MKL is not the problem in this case (I agree
with you that the link line has to be cleaned up though... my makefile is
a little chaotic with all the libraries that I use).

And thanks Barry and Matthew! I'll ask on the Intel compiler forum, since
I also think this is a problem related to the compiler, and if I make any
progress I'll let you know! In the end, I guess I'll drop acceleration
through OpenMP threads...

Thanks all!

Adrian.

2018-03-02 17:11 GMT+01:00 Satish Balay <balay at mcs.anl.gov>:

> When using MKL, PETSc attempts to default to sequential MKL.
>
> Perhaps this pulls in a *conflicting* dependency against -liomp5, and
> one has to use threaded MKL for this case, i.e. not use
> -lmkl_sequential.
>
> You appear to have multiple MKL libraries linked in - it's not clear
> what they are for, or whether they conflict.
>
> > -L/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/lib/intel64
> > -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lpetsc -lmkl_intel_lp64
> > -lmkl_intel_thread -lmkl_core -lmkl_lapack95_lp64 -liomp5 -lpthread -lm
>
> > -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread
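For reference, a consistent *threaded* MKL link group, based on Intel's
standard link-line guidance for ifort with OpenMP (library names here assume
the LP64 interface; check the MKL link-line advisor for your exact version),
would look something like:

```shell
# Threaded MKL: one interface layer, one threading layer, one core layer,
# plus the matching OpenMP runtime. Do NOT also link -lmkl_sequential.
-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl
```

Mixing -lmkl_intel_thread and -lmkl_sequential in one link line, as in the
excerpts above, pulls in two different threading layers, and which one
resolves a given symbol depends on link order.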
>
> To test this out, I suggest rebuilding PETSc with
> --download-fblaslapack [and no MKL or related packages] and then running
> this test case you have [with OpenMP].
>
> And then add back one MKL package at a time.
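A minimal reconfigure along these lines might look like the following
(a sketch reusing the original configure options but with MKL swapped out
for the downloaded reference BLAS/LAPACK; the PETSC_ARCH name is made up):

```shell
./configure --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort \
  --with-debugging=1 --with-scalar-type=complex --with-openmp \
  --download-fblaslapack \
  PETSC_ARCH=linux-intel-dbg-fblas
```

If the OpenMP test then runs cleanly against this build, the MKL/iomp5 mix
in the original link line becomes the prime suspect.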
>
> Satish
>
>
> On Fri, 2 Mar 2018, Adrián Amor wrote:
>
> > Hi all,
> >
> > I have been working for the last few months with PETSc on a FEM program
> > written in Fortran, so far sequential. Now I want to parallelize it with
> > OpenMP, and I have run into some problems. Finally, I built a mockup
> > program to try to localize the error.
> >
> > 1. I have compiled PETSc with these options:
> > ./configure --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort
> > --with-blas-lapack-dir=/opt/intel/mkl/lib/intel64/ --with-debugging=1
> > --with-scalar-type=complex --with-threadcomm --with-pthreadclasses
> > --with-openmp
> > --with-openmp-include=/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > --with-openmp-lib=/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin/libiomp5.a
> > PETSC_ARCH=linux-intel-dbg PETSC-AVOID-MPIF-H=1
> >
> > (I have also tried removing --with-threadcomm --with-pthreadclasses,
> > and using libiomp5.so instead.)
> >
> > 2. The program to be executed is composed of two files, one is
> > hellocount.F90:
> > MODULE hello_count
> >   use omp_lib
> >   IMPLICIT none
> >
> >   CONTAINS
> >   subroutine hello_print ()
> >      integer :: nthreads,mythread
> >
> >    !pragma hello-who-omp-f
> >    !$omp parallel
> >      nthreads = omp_get_num_threads()
> >      mythread = omp_get_thread_num()
> >      write(*,'("Hello from",i3," out of",i3)') mythread,nthreads
> >    !$omp end parallel
> >    !pragma end
> >    end subroutine hello_print
> > END MODULE hello_count
> >
> > and the other one is hellocount_main.F90:
> > Program Hello
> >
> >    USE hello_count
> >
> >    call hello_print
> >
> >    STOP
> >
> > end Program Hello
> >
> > 3. To compile these two files I use:
> > rm -rf _obj
> > mkdir _obj
> >
> > ifort -E -I/home/aamor/petsc/include
> > -I/home/aamor/petsc/linux-intel-dbg/include -c hellocount.F90
> > >_obj/hellocount.f90
> > ifort -E -I/home/aamor/petsc/include
> > -I/home/aamor/petsc/linux-intel-dbg/include -c hellocount_main.F90
> > >_obj/hellocount_main.f90
> >
> > mpiifort -CB -g -warn all -O0 -shared-intel -check:none -qopenmp -module
> > _obj -I./_obj -I/home/aamor/MUMPS_5.1.2/include
> > -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include
> > -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include/intel64/lp64/
> > -I/home/aamor/petsc/include -I/home/aamor/petsc/linux-intel-dbg/include
> > -o _obj/hellocount.o -c _obj/hellocount.f90
> > mpiifort -CB -g -warn all -O0 -shared-intel -check:none -qopenmp -module
> > _obj -I./_obj -I/home/aamor/MUMPS_5.1.2/include
> > -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include
> > -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include/intel64/lp64/
> > -I/home/aamor/petsc/include -I/home/aamor/petsc/linux-intel-dbg/include
> > -o _obj/hellocount_main.o -c _obj/hellocount_main.f90
> >
> > mpiifort -CB -g -warn all -O0 -shared-intel -check:none -qopenmp -module
> > _obj -I./_obj -o exec/HELLO _obj/hellocount.o _obj/hellocount_main.o
> > /home/aamor/lib_tmp/libarpack_LinuxIntel15.a
> > /home/aamor/MUMPS_5.1.2/lib/libzmumps.a
> > /home/aamor/MUMPS_5.1.2/lib/libmumps_common.a
> > /home/aamor/MUMPS_5.1.2/lib/libpord.a
> > /home/aamor/parmetis-4.0.3/lib/libparmetis.a
> > /home/aamor/parmetis-4.0.3/lib/libmetis.a
> > -L/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/lib/intel64
> > -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lpetsc -lmkl_intel_lp64
> > -lmkl_intel_thread -lmkl_core -lmkl_lapack95_lp64 -liomp5 -lpthread -lm
> > -L/home/aamor/lib_tmp -lgidpost -lz /home/aamor/lua-5.3.3/src/liblua.a
> > /home/aamor/ESEAS-master/libeseas.a
> > -Wl,-rpath,/home/aamor/petsc/linux-intel-dbg/lib
> > -L/home/aamor/petsc/linux-intel-dbg/lib
> > -Wl,-rpath,/opt/intel/mkl/lib/intel64 -L/opt/intel/mkl/lib/intel64
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -L/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib
> > -L/opt/intel/impi/5.1.2.150/intel64/lib
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -L/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib -lmkl_intel_lp64
> > -lmkl_sequential -lmkl_core -lpthread -lX11 -lssl -lcrypto -lifport
> > -lifcore_pic -lmpicxx -ldl
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -L/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib
> > -L/opt/intel/impi/5.1.2.150/intel64/lib -lmpifort
> > -lmpi -lmpigi -lrt -lpthread
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -L/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib
> > -L/opt/intel/impi/5.1.2.150/intel64/lib
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -L/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib
> > -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib -limf -lsvml -lirng -lm
> > -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -L/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib
> > -L/opt/intel/impi/5.1.2.150/intel64/lib
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -L/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -ldl
> >
> > exec/HELLO
> >
> > 4. Then I have seen that:
> > 4.1. If I set OMP_NUM_THREADS=2 and remove -lpetsc and -lifcore_pic
> > from the last step, I get:
> > Hello from  0 out of  2
> > Hello from  1 out of  2
> > 4.2. But if I add -lpetsc and -lifcore_pic (because I want to use
> > PETSc), I get this error:
> > Hello from  0 out of  2
> > forrtl: severe (40): recursive I/O operation, unit -1, file unknown
> > Image              PC                Routine            Line        Source
> > HELLO              000000000041665C  Unknown            Unknown     Unknown
> > HELLO              00000000004083C8  Unknown            Unknown     Unknown
> > libiomp5.so        00007F9C603566A3  Unknown            Unknown     Unknown
> > libiomp5.so        00007F9C60325007  Unknown            Unknown     Unknown
> > libiomp5.so        00007F9C603246F5  Unknown            Unknown     Unknown
> > libiomp5.so        00007F9C603569C3  Unknown            Unknown     Unknown
> > libpthread.so.0    0000003CE76079D1  Unknown            Unknown     Unknown
> > libc.so.6          0000003CE6AE88FD  Unknown            Unknown     Unknown
> > If I set OMP_NUM_THREADS to 8, I get:
> > forrtl: severe (40): recursive I/O operation, unit -1, file unknown
> > forrtl: severe (40): recursive I/O operation, unit -1, file unknown
> > forrtl: severe (40): recursive I/O operation, unit -1, file unknown
> >
> > I am sorry if this is a trivial problem, because I guess lots of people
> > use PETSc with OpenMP in Fortran, but I have really done my best to
> > figure out where the error is. Can you help me?
> >
> > Thanks a lot!
> >
> > Adrian.
> >
>
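As an aside, "forrtl: severe (40): recursive I/O operation" typically means
that two threads entered Fortran I/O at the same time. Serializing the WRITE
is a common workaround for the symptom (it does not address the underlying
library conflict); note also that in the original mockup nthreads and
mythread are shared across threads, which is a data race. A minimal
thread-safe variant of the subroutine might look like:

```fortran
subroutine hello_print ()
   use omp_lib
   implicit none
   integer :: nthreads, mythread

   ! Each thread needs its own copies of these variables.
   !$omp parallel private(nthreads, mythread)
   nthreads = omp_get_num_threads()
   mythread = omp_get_thread_num()
   ! Serialize the WRITE so only one thread does I/O at a time.
   !$omp critical
   write(*,'("Hello from",i3," out of",i3)') mythread, nthreads
   !$omp end critical
   !$omp end parallel
end subroutine hello_print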