[petsc-users] Petsc messing with OpenMP
Timothée Nicolas
timothee.nicolas at gmail.com
Tue Jan 30 11:36:33 CST 2018
Thank you for your answer,
I am not responsible for the design choices in this code to begin with, but in any
case, as you can see in the example, I am not making any PETSc call. Also, in
our code, even though OpenMP is used, it never interacts with PETSc: the
latter is used only at a specific (but crucial) place in the code, and
without OpenMP. In that sense, the model we are using is NOT the
'PETSc with threads' model described in the page you sent me.
Nevertheless, it appears that merely linking against PETSc is enough to perturb
the behavior of OpenMP. I believe this deserves further investigation. On top
of the bug I reported in my first message, I have lately noticed very strange
OpenMP behavior, the kind of bugs that 'don't make sense' at first sight
(it would be too long/boring to give details at this stage). I am starting to
realize that it may have something to do with PETSc.
Again, the webpage you sent me seems to adopt the point of view that
hybrid MPI/OpenMP is useless from a performance point of view, and/or
that PETSc routines should not be threaded. It does not, however, explain
why the behavior of OpenMP can change when a program is merely linked against
PETSc, without a single PETSc call in the program.
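In the meantime, a minimal sketch of a possible workaround is below (assuming,
and this is only my hypothesis, that the crash comes from several threads
entering the Fortran I/O runtime at the same time): serialize the prints with
an OpenMP critical section.

program hello_critical
  !$ use omp_lib
  implicit none
  integer nthreads, tid

  !$omp parallel private(nthreads, tid)
  tid = omp_get_thread_num()
  ! Only one thread at a time enters the (unnamed) critical section, so
  ! the Fortran runtime never has to handle two prints simultaneously.
  !$omp critical
  print *, 'hello world from thread = ', tid
  if (tid .eq. 0) then
     nthreads = omp_get_num_threads()
     print *, 'number of threads = ', nthreads
  end if
  !$omp end critical
  !$omp end parallel

end program hello_critical

If the hypothesis is correct, building this with the same mpif90 -qopenmp
command and PETSc link line should at least let the per-thread output appear
while the real cause is investigated.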
Best
2018-01-30 18:02 GMT+01:00 Smith, Barry F. <bsmith at mcs.anl.gov>:
>
> I don't know what you are trying to do with OpenMP and PETSc, nor do I
> understand why anyone would use OpenMP, but you cannot call virtually any
> PETSc function or operation while you are using threads.
>
> Best to use PETSc as intended, with one MPI process per core or hardware
> thread and not use OpenMP at all.
> http://www.mcs.anl.gov/petsc/miscellaneous/petscthreads.html
>
>
> Barry
>
>
> > On Jan 30, 2018, at 10:57 AM, Timothée Nicolas <timothee.nicolas at gmail.com> wrote:
> >
> > Dear petsc team,
> >
> > For a while, I have been wondering why I have never managed to print
> > what threads are doing in an OpenMP region in my Fortran programs. Some
> > people told me it was normal because the threads get confused, all trying
> > to write at the same time. However, I realised today that the problem seems
> > to be related to PETSc. I have a very simple "hello world" example that
> > reproduces the problem:
> >
> > program hello
> >   !$ use omp_lib
> >   implicit none
> >   integer nthreads, tid
> >
> >   !$omp parallel private(nthreads, tid)
> >   tid = omp_get_thread_num()
> >   print *, 'hello world from thread = ', tid
> >
> >   if (tid .eq. 0) then
> >     nthreads = omp_get_num_threads()
> >     print *, 'number of threads = ', nthreads
> >   end if
> >   !$omp end parallel
> >
> > end program hello
> >
> > If I compile it with
> >
> > mpif90 -qopenmp -o omp main.f90
> >
> > Then there is no problem. But if I link the PETSc library (as is the case
> > in my code):
> >
> > mpif90 -qopenmp -o omp main.f90 -L/home/timotheenicolas/petsc-3.7.3/arch-linux2-c-debug/lib -lpetsc
> >
> >
> > Then I get the following error after executing export OMP_NUM_THREADS=2; ./omp :
> >
> > hello world from thread = 0
> > number of threads = 2
> > forrtl: severe (40): recursive I/O operation, unit -1, file unknown
> > Image              PC                Routine   Line      Source
> > omp                0000000000403BC8  Unknown   Unknown   Unknown
> > omp                0000000000403572  Unknown   Unknown   Unknown
> > libiomp5.so        00002AAAAD3146A3  Unknown   Unknown   Unknown
> > libiomp5.so        00002AAAAD2E3007  Unknown   Unknown   Unknown
> > libiomp5.so        00002AAAAD2E26F5  Unknown   Unknown   Unknown
> > libiomp5.so        00002AAAAD3149C3  Unknown   Unknown   Unknown
> > libpthread.so.0    00002AAAAD5BADC5  Unknown   Unknown   Unknown
> > libc.so.6          00002AAAAD8C5CED  Unknown   Unknown   Unknown
> >
> >
> > This means that in my programs using PETSc, I cannot use prints to see
> > exactly what the threads are doing, which is a pain when debugging is
> > required (that is to say, all the time). Is this issue expected?
> >
> > Thx in advance
> >
> > Timothee NICOLAS
> >
>
>