[petsc-dev] Status of pthreads and OpenMP support

John Fettig john.fettig at gmail.com
Thu Oct 25 12:53:00 CDT 2012


I'm curious about the status of an SMP or hybrid SMP/DMP PETSc
library.  What is the tentative timeline?  Will there be functional
support for threads in the next release?

I built petsc-dev with pthreads and OpenMP enabled via --with-openmp=1
--with-pthreadclasses=1 and added PETSC_THREADCOMM_ACTIVE to
$PETSC_ARCH/include/petscconf.h.  My machine is a dual-socket
Westmere system with two 6-core CPUs.  Then I ran the example given
in the installation documentation:

mpirun -np $i ./ex19 -threadcomm_type {openmp,pthread}
-threadcomm_nthreads $j -pc_type none -da_grid_x 100 -da_grid_y 100
-log_summary -mat_no_inode -preload off
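For completeness, the build steps described above look roughly like this (the PETSC_ARCH name here is just an illustrative placeholder, not what I actually used):

```shell
# Sketch of the build: configure with both threading models enabled,
# then hand-enable the threadcomm code, which petsc-dev does not
# define by default.
./configure PETSC_ARCH=arch-threads --with-openmp=1 --with-pthreadclasses=1
echo '#define PETSC_THREADCOMM_ACTIVE 1' >> arch-threads/include/petscconf.h
make PETSC_ARCH=arch-threads all
```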

I've attached the log_summary output using OpenMP, with np=1,2 and
nthreads=1,2,4,6.  With OpenMP, the speedup going from 1 process with
1 thread to 2 processes with 6 threads each is 5.358e+00/9.138e-01 =
5.9x.
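Spelled out (times taken from the two runs quoted above; the 12-core divisor assumes all 2x6 cores are in use):

```shell
# Speedup and parallel efficiency from the two wall-clock times above:
# 5.358 s (1 process, 1 thread) vs 0.9138 s (2 processes x 6 threads).
awk 'BEGIN { s = 5.358 / 0.9138; printf "speedup %.1fx, efficiency %.0f%%\n", s, 100 * s / 12 }'
# prints: speedup 5.9x, efficiency 49%
```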

With pthreads, something is clearly not working as designed: the
two-thread time is 44x the serial time.  I've attached the
log_summary for 1 to 6 threads.

With non-threaded PETSc, I typically see ~50% parallel efficiency on
all cores for CFD problems.  Is it wrong to hope that a threaded
version can improve on this?  Or should I be satisfied that I seem to
be getting about (memory bandwidth)/6 of performance out of each
core on a socket?

Thanks,
John
-------------- next part --------------
A non-text attachment was scrubbed...
Name: openmp.out
Type: application/octet-stream
Size: 99845 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-dev/attachments/20121025/feb024d9/attachment.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pthread.out
Type: application/octet-stream
Size: 50039 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-dev/attachments/20121025/feb024d9/attachment-0001.obj>

