[petsc-users] Strange efficiency in PETSc-dev using OpenMP

Barry Smith bsmith at mcs.anl.gov
Sun Sep 22 16:24:33 CDT 2013


  If you run the OpenMP-compiled version WITHOUT the 

-threadcomm_nthreads 1
-threadcomm_type openmp

  command line options, is it still slow?

   I want to understand whether the OpenMP compile options are triggering the much slower run.
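
   For example (reusing your executable from the runs below; the log file names here are just placeholders), compare

   mpiexec -n 1 ksp_inhm_d -log_summary log_noopts.log

   against

   mpiexec -n 1 ksp_inhm_d -threadcomm_type openmp -threadcomm_nthreads 1 -log_summary log_1thread.log

   If the first run is already slow, then the build itself, not the threadcomm options, is responsible.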

   Barry

On Sep 22, 2013, at 2:43 PM, Danyang Su <danyang.su at gmail.com> wrote:

> Hi Barry,
> 
> Thanks, please find the answers below.
> 
> On 22/09/2013 9:53 AM, Barry Smith wrote:
>> 1)      #                          WARNING!!!                    #
>>       #                                                        #
>>       #   This code was compiled with a debugging option,      #
>>       #   To get timing results run ./configure                #
>>       #   using --with-debugging=no, the performance will      #
>>       #   be generally two or three times faster.              #
>> 
>>     Never time without optimization; it can give very misleading information because different parts of the code speed up very differently when optimized.
>             With optimization, the problem still exists. See the attached logs for 1 thread and 4 threads.
>> 2) Where are the 4 MPI processes being placed on your system? Are they being placed on 4 cores on the same CPU (as with the OpenMP run) or possibly on different CPUs?
>            Yes, the system information is as follows:
>            OS: Windows 7 X64 Pro, CYGWIN
>            Processor: Intel Xeon E5-2620 2.0GHz, 6 cores/12 threads
>            Memory: 16GB
>            Compiler: Intel Visual Fortran V13.1.
>> 3) Do you have any OpenMP pragmas in your code? Make a run where you take them all out.
>            At present I have no OpenMP pragmas in the code. The code is the same as the one I used for the PETSc MPI version, and it works fine when using MPI.
>> 
>> 4) Both runs are actually taking very little time in the solver:
>> 
>>   KSPSolve               1 1.0 9.2897e-002
>>   KSPSolve               1 1.0 2.9056e-001
>> 
>>    How are you getting your matrix? From a file?
>            Yes, the matrix is currently read from files. Could this be the problem? The timing starts after the matrix is read. And I don't see any speedup in the solver; the runtimes for KSPSolve are almost the same.
> 
>      nthreads = 1,  KSPSolve               1 1.0  6.2800e-002
> 
>      nthreads = 4,  KSPSolve               1 1.0  5.5090e-002
> 
> The main question is that the program gets stuck in the following code when run with OpenMP, but there is no problem when run with MPI.
> 
>                do i = istart, iend - 1                    ! 0-based global row index
>                   ii = ia_in(i+1)                         ! start of row i in the 1-based CSR row pointer
>                   jj = ia_in(i+2)                         ! start of row i+1, so row i has jj-ii entries
>                   call MatSetValues(a, ione, i, jj-ii, ja_in(ii:jj-1)-1, a_in(ii:jj-1), Insert_Values, ierr)   ! column indices shifted to 0-based
>                end do
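> 
> (One thing I am not sure about: should the matrix be preallocated before this loop? A minimal sketch of preallocating from the same CSR row pointers, assuming a scratch PetscInt array nnz and a PetscInt izero = 0, would be
> 
>                do i = istart, iend - 1
>                   nnz(i - istart + 1) = ia_in(i+2) - ia_in(i+1)   ! nonzeros in row i
>                end do
>                call MatSeqAIJSetPreallocation(a, izero, nnz, ierr)
>                call MatMPIAIJSetPreallocation(a, izero, nnz, izero, nnz, ierr)   ! nnz reused as a simple upper bound for the off-diagonal part
> 
> Whichever call matches the matrix type takes effect; the other is a no-op.)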
> 
> The test code is also attached. I have removed some unnecessary code, but some remains.
>> 
>>    Barry
>> 
>> 
>> On Sep 21, 2013, at 11:06 PM, Danyang Su <danyang.su at gmail.com> wrote:
>> 
>>> Hi Shri,
>>> 
>>> Thanks for your info. It works with the option -threadcomm_type openmp, but another problem arises, as described below.
>>> 
>>> The sparse matrix is 53760*53760 with 1067392 non-zero entries. If the code is compiled with PETSc-3.4.2, it works fine: the equations are solved quickly and I can see the speedup. But if the code is compiled with PETSc-dev with the OpenMP option, it takes a long time to solve the equations and I cannot see any speedup when more threads are used.
>>> 
>>> For PETSc-3.4.2, run with "mpiexec -n 4 ksp_inhm_d -log_summary log_mpi4_petsc3.4.2.log", the iteration count and runtimes are:
>>> Iterations     6 time_assembly  0.4137E-01 time_ksp  0.9296E-01
>>> 
>>> For PETSc-dev, run with "mpiexec -n 1 ksp_inhm_d -threadcomm_type openmp -threadcomm_nthreads 4 -log_summary log_openmp_petsc_dev.log", the iteration count and runtimes are:
>>> Iterations     6 time_assembly  0.3595E+03 time_ksp  0.2907E+00
>>> 
>>> Most of the time (time_assembly = 0.3595E+03) is spent in the following code:
>>>                 do i = istart, iend - 1
>>>                    ii = ia_in(i+1)
>>>                    jj = ia_in(i+2)
>>>                    call MatSetValues(a, ione, i, jj-ii, ja_in(ii:jj-1)-1, a_in(ii:jj-1), Insert_Values, ierr)
>>>                 end do
>>> 
>>> The log files for both PETSc-3.4.2 and PETSc-dev are attached.
>>> 
>>> Is there anything wrong with my code or with the run options? The above code works fine when using MPICH.
>>> 
>>> Thanks and regards,
>>> 
>>> Danyang
>>> 
>>> On 21/09/2013 2:09 PM, Shri wrote:
>>>> There are three thread communicator types in PETSc. The default is "no thread", which is basically a non-threaded version. The other two types are "openmp" and "pthread". If you want to use OpenMP, use the option -threadcomm_type openmp.
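>>>> 
>>>> For example, on the command line (I believe the default type is registered as "nothread"):
>>>> 
>>>>   -threadcomm_type nothread                        (default, non-threaded)
>>>>   -threadcomm_type openmp  -threadcomm_nthreads 4
>>>>   -threadcomm_type pthread -threadcomm_nthreads 4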
>>>> 
>>>> Shri
>>>> 
>>>> On Sep 21, 2013, at 3:46 PM, Danyang Su <danyang.su at gmail.com> wrote:
>>>> 
>>>>> Hi Barry,
>>>>> 
>>>>> Thanks for the quick reply.
>>>>> 
>>>>> After changing
>>>>> #if defined(PETSC_HAVE_PTHREADCLASSES) || defined (PETSC_HAVE_OPENMP)
>>>>> to
>>>>> #if defined(PETSC_HAVE_PTHREADCLASSES)
>>>>> and commenting out
>>>>> #elif defined(PETSC_HAVE_OPENMP)
>>>>> PETSC_EXTERN PetscStack *petscstack;
>>>>> 
>>>>> it can then be compiled and validated with "make test".
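>>>>> 
>>>>> (For reference, I invoke the tests from the PETSc source directory roughly as follows; the arch name is just a placeholder for my build:
>>>>> 
>>>>>    make PETSC_DIR=$PWD PETSC_ARCH=arch-mswin-c-debug test
>>>>> )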
>>>>> 
>>>>> But I still have questions about running the examples. After rebuilding the code (e.g., ksp_ex2f.f), I can run it with "mpiexec -n 1 ksp_ex2f", "mpiexec -n 4 ksp_ex2f", or "mpiexec -n 1 ksp_ex2f -threadcomm_nthreads 1", but if I run it with "mpiexec -n 1 ksp_ex2f -threadcomm_nthreads 4", I get a lot of error output (attached).
>>>>> 
>>>>> The code is not modified and there are no OpenMP routines in it. For the current development in my project, I want to keep the OpenMP code that calculates the matrix values, but solve the system with PETSc (OpenMP). Is that possible?
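>>>>> 
>>>>> A rough sketch of what I mean (nrows and the coefficient function coef() are placeholders for my own code):
>>>>> 
>>>>> !     fill the CSR value array with my own OpenMP code ...
>>>>> !$omp parallel do private(i, j)
>>>>>       do i = 1, nrows
>>>>>          do j = ia_in(i), ia_in(i+1) - 1
>>>>>             a_in(j) = coef(i, ja_in(j))
>>>>>          end do
>>>>>       end do
>>>>> !$omp end parallel do
>>>>> !     ... then call MatSetValues serially and let PETSc solve with OpenMP threads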
>>>>> 
>>>>> Thanks and regards,
>>>>> 
>>>>> Danyang
>>>>> 
>>>>> 
>>>>> 
>>>>> On 21/09/2013 7:26 AM, Barry Smith wrote:
>>>>>>   Danyang,
>>>>>> 
>>>>>>      I don't think the  || defined (PETSC_HAVE_OPENMP)   belongs in the code below.
>>>>>> 
>>>>>> /*  Linux functions CPU_SET and others don't work if sched.h is not included before
>>>>>>     including pthread.h. Also, these functions are active only if either _GNU_SOURCE
>>>>>>     or __USE_GNU is not set (see /usr/include/sched.h and /usr/include/features.h), hence
>>>>>>     set these first.
>>>>>> */
>>>>>> #if defined(PETSC_HAVE_PTHREADCLASSES) || defined (PETSC_HAVE_OPENMP)
>>>>>> 
>>>>>> Edit include/petscerror.h, locate these lines, remove that part, and then rerun make all. Let us know whether it works.
>>>>>> 
>>>>>>    Barry
>>>>>> 
>>>>>> i.e. replace
>>>>>> 
>>>>>> #if defined(PETSC_HAVE_PTHREADCLASSES) || defined (PETSC_HAVE_OPENMP)
>>>>>> 
>>>>>> with
>>>>>> 
>>>>>> #if defined(PETSC_HAVE_PTHREADCLASSES)
>>>>>> 
>>>>>> On Sep 21, 2013, at 6:53 AM, Matthew Knepley <petsc-maint at mcs.anl.gov> wrote:
>>>>>> 
>>>>>> 
>>>>>>> On Sat, Sep 21, 2013 at 12:18 AM, Danyang Su <danyang.su at gmail.com> wrote:
>>>>>>> Hi All,
>>>>>>> 
>>>>>>> I got errors when compiling petsc-dev with OpenMP in Cygwin. Previously, I successfully compiled petsc-3.4.2 and it works fine.
>>>>>>> The log files have been attached.
>>>>>>> 
>>>>>>> The OpenMP configure test is wrong. It clearly fails to find pthread.h, but the test passes. Then in petscerror.h
>>>>>>> we guard pthread.h using PETSC_HAVE_OPENMP. Can someone who knows OpenMP fix this?
>>>>>>> 
>>>>>>>     Matt
>>>>>>>  Thanks,
>>>>>>> 
>>>>>>> Danyang
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> -- 
>>>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>>>>> -- Norbert Wiener
>>>>>>> 
>>>>> <error.txt>
>>> <log_mpi4_petsc3.4.2.log><log_openmp_petsc_dev.log>
> 
> <log_openmp_petsc_dev_opt_1.log><log_openmp_petsc_dev_opt_4.log><ksp_inhm_test.F90>


