[petsc-users] Using PETSC with an openMP program
Matthew Knepley
knepley at gmail.com
Fri Mar 2 09:40:57 CST 2018
On Fri, Mar 2, 2018 at 10:32 AM, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
>
>    PETSc is for writing parallel codes using MPI; it is not for writing
> parallel codes with OpenMP.
> http://www.mcs.anl.gov/petsc/miscellaneous/petscthreads.html
To follow up:
1) We do not have any expertise debugging OpenMP problems, and they are
quite compiler specific, so it is a big pain. This contributed to our
removing explicit OpenMP from PETSc.
2) Nothing prevents you from using OpenMP yourself, but we do not know
how to debug things like this (whereas we can help with MPI).
3) We would caution you about using OpenMP. MPI can be used anywhere you
can use OpenMP (for example, MPI can declare shared memory regions, as
sketched below), and it is generally faster, cleaner, and more maintainable.
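
To make point 3 concrete, here is a minimal, untested sketch of the MPI-3
shared-memory route, assuming an MPI-3 implementation with the mpi_f08
module (the program and variable names below are only illustrative, not
anything from PETSc). Ranks on the same node split off a communicator, one
rank allocates a shared window, and every rank aliases it as an ordinary
Fortran array:

    program mpi_shm_sketch
      use mpi_f08
      use, intrinsic :: iso_c_binding
      implicit none
      integer, parameter :: n = 100
      type(MPI_Comm) :: nodecomm
      type(MPI_Win)  :: win
      integer :: noderank, disp_unit, ierr
      integer(kind=MPI_ADDRESS_KIND) :: winsize
      type(c_ptr) :: baseptr
      real(8), pointer :: shared(:)

      call MPI_Init(ierr)
      ! Group the ranks that live on the same node; these can share memory.
      call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                               MPI_INFO_NULL, nodecomm, ierr)
      call MPI_Comm_rank(nodecomm, noderank, ierr)

      ! Rank 0 of the node allocates the segment; the others pass size 0.
      disp_unit = 8
      winsize = 0
      if (noderank == 0) winsize = n * disp_unit
      call MPI_Win_allocate_shared(winsize, disp_unit, MPI_INFO_NULL, &
                                   nodecomm, baseptr, win, ierr)

      ! Every rank maps rank 0's segment and aliases it as a Fortran array.
      call MPI_Win_shared_query(win, 0, winsize, disp_unit, baseptr, ierr)
      call c_f_pointer(baseptr, shared, [n])

      if (noderank == 0) shared = 1.0d0             ! one rank writes ...
      call MPI_Barrier(nodecomm, ierr)
      print *, 'rank', noderank, 'sees', shared(1)  ! ... all ranks read

      call MPI_Win_free(win, ierr)
      call MPI_Finalize(ierr)
    end program mpi_shm_sketch

The ranks touch shared() directly, much as OpenMP threads would, but the
synchronization points (here a barrier) are explicit, which is part of what
makes this style easier to reason about and debug.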
Thanks,
Matt
>
> Barry
>
>
> > On Mar 2, 2018, at 9:00 AM, Adrián Amor <aamor at pa.uc3m.es> wrote:
> >
> > Hi all,
> >
> > I have been working in the last months with PETSc in a FEM program
> > written in Fortran, so far sequential. Now I want to parallelize it with
> > OpenMP, and I have found some problems. Finally, I built a mockup
> > program to try to localize the error.
> >
> > 1. I have compiled PETSc with these options:
> > ./configure --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort
> > --with-blas-lapack-dir=/opt/intel/mkl/lib/intel64/ --with-debugging=1
> > --with-scalar-type=complex --with-threadcomm --with-pthreadclasses
> > --with-openmp
> > --with-openmp-include=/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > --with-openmp-lib=/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin/libiomp5.a
> > PETSC_ARCH=linux-intel-dbg PETSC-AVOID-MPIF-H=1
> >
> > (I have also tried removing --with-threadcomm --with-pthreadclasses,
> > and using libiomp5.so instead of libiomp5.a.)
> >
> > 2. The program to be executed consists of two files. The first is
> > hellocount.F90:
> > MODULE hello_count
> >   use omp_lib
> >   IMPLICIT none
> >
> > CONTAINS
> >   subroutine hello_print ()
> >     integer :: nthreads, mythread
> >
> >     !pragma hello-who-omp-f
> >     ! Each thread needs its own copy of these two variables;
> >     ! they are shared by default, so the concurrent writes race.
> >     !$omp parallel private(nthreads, mythread)
> >     nthreads = omp_get_num_threads()
> >     mythread = omp_get_thread_num()
> >     write(*,'("Hello from",i3," out of",i3)') mythread, nthreads
> >     !$omp end parallel
> >     !pragma end
> >   end subroutine hello_print
> > END MODULE hello_count
> >
> > and the other is hellocount_main.F90:
> > Program Hello
> >   USE hello_count
> >   call hello_print
> >   STOP
> > end Program Hello
> >
> > 3. To compile these two files I use:
> > rm -rf _obj
> > mkdir _obj
> >
> > ifort -E -I/home/aamor/petsc/include -I/home/aamor/petsc/linux-intel-dbg/include -c hellocount.F90 > _obj/hellocount.f90
> > ifort -E -I/home/aamor/petsc/include -I/home/aamor/petsc/linux-intel-dbg/include -c hellocount_main.F90 > _obj/hellocount_main.f90
> >
> > mpiifort -CB -g -warn all -O0 -shared-intel -check:none -qopenmp
> > -module _obj -I./_obj -I/home/aamor/MUMPS_5.1.2/include
> > -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include
> > -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include/intel64/lp64/
> > -I/home/aamor/petsc/include -I/home/aamor/petsc/linux-intel-dbg/include
> > -o _obj/hellocount.o -c _obj/hellocount.f90
> > mpiifort -CB -g -warn all -O0 -shared-intel -check:none -qopenmp
> > -module _obj -I./_obj -I/home/aamor/MUMPS_5.1.2/include
> > -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include
> > -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include/intel64/lp64/
> > -I/home/aamor/petsc/include -I/home/aamor/petsc/linux-intel-dbg/include
> > -o _obj/hellocount_main.o -c _obj/hellocount_main.f90
> >
> > mpiifort -CB -g -warn all -O0 -shared-intel -check:none -qopenmp
> > -module _obj -I./_obj -o exec/HELLO _obj/hellocount.o _obj/hellocount_main.o
> > /home/aamor/lib_tmp/libarpack_LinuxIntel15.a
> > /home/aamor/MUMPS_5.1.2/lib/libzmumps.a
> > /home/aamor/MUMPS_5.1.2/lib/libmumps_common.a
> > /home/aamor/MUMPS_5.1.2/lib/libpord.a
> > /home/aamor/parmetis-4.0.3/lib/libparmetis.a
> > /home/aamor/parmetis-4.0.3/lib/libmetis.a
> > -L/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/lib/intel64
> > -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lpetsc -lmkl_intel_lp64
> > -lmkl_intel_thread -lmkl_core -lmkl_lapack95_lp64 -liomp5 -lpthread -lm
> > -L/home/aamor/lib_tmp -lgidpost -lz /home/aamor/lua-5.3.3/src/liblua.a
> > /home/aamor/ESEAS-master/libeseas.a
> > -Wl,-rpath,/home/aamor/petsc/linux-intel-dbg/lib
> > -L/home/aamor/petsc/linux-intel-dbg/lib
> > -Wl,-rpath,/opt/intel/mkl/lib/intel64 -L/opt/intel/mkl/lib/intel64
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -L/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib
> > -L/opt/intel/impi/5.1.2.150/intel64/lib
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -L/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib -lmkl_intel_lp64
> > -lmkl_sequential -lmkl_core -lpthread -lX11 -lssl -lcrypto -lifport
> > -lifcore_pic -lmpicxx -ldl
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -L/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib
> > -L/opt/intel/impi/5.1.2.150/intel64/lib -lmpifort -lmpi -lmpigi -lrt -lpthread
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -L/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib
> > -L/opt/intel/impi/5.1.2.150/intel64/lib
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -L/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib
> > -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib -limf -lsvml -lirng -lm
> > -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -L/opt/intel/impi/5.1.2.150/intel64/lib/debug_mt
> > -Wl,-rpath,/opt/intel/impi/5.1.2.150/intel64/lib
> > -L/opt/intel/impi/5.1.2.150/intel64/lib
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -L/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin
> > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
> > -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -ldl
> >
> > exec/HELLO
> >
> > 4. Then I have seen that:
> > 4.1. If I set OMP_NUM_THREADS=2 and I remove -lpetsc and -lifcore_pic
> > from the last step, I get:
> > Hello from 0 out of 2
> > Hello from 1 out of 2
> > 4.2. But if I add -lpetsc and -lifcore_pic (because I want to use PETSc),
> > I get this error:
> > Hello from 0 out of 2
> > forrtl: severe (40): recursive I/O operation, unit -1, file unknown
> > Image            PC                Routine    Line       Source
> > HELLO            000000000041665C  Unknown    Unknown    Unknown
> > HELLO            00000000004083C8  Unknown    Unknown    Unknown
> > libiomp5.so      00007F9C603566A3  Unknown    Unknown    Unknown
> > libiomp5.so      00007F9C60325007  Unknown    Unknown    Unknown
> > libiomp5.so      00007F9C603246F5  Unknown    Unknown    Unknown
> > libiomp5.so      00007F9C603569C3  Unknown    Unknown    Unknown
> > libpthread.so.0  0000003CE76079D1  Unknown    Unknown    Unknown
> > libc.so.6        0000003CE6AE88FD  Unknown    Unknown    Unknown
> > If I set OMP_NUM_THREADS=8, I get:
> > forrtl: severe (40): recursive I/O operation, unit -1, file unknown
> > forrtl: severe (40): recursive I/O operation, unit -1, file unknown
> > forrtl: severe (40): recursive I/O operation, unit -1, file unknown
> >
> > I am sorry if this is a trivial problem, since I guess that lots of
> > people use PETSc with OpenMP in Fortran, but I have really done my best
> > to figure out where the error is. Can you help me?
> >
> > Thanks a lot!
> >
> > Adrian.
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/