[petsc-users] thread model

Shri abhyshr at mcs.anl.gov
Sat Nov 3 23:50:15 CDT 2012


On Nov 3, 2012, at 8:20 PM, Mohammad Mirzadeh wrote:

> Hum ... That did not help. I found the option -threadcomm_affinities 0,1,2,3,4,5,6,7, but that also did nothing. Is that option relevant here?
This option currently only works with pthreads.
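For example, with pthreads a run would look roughly like the following (your_app is just a placeholder for your own executable):

    ./your_app -threadcomm_type pthread -threadcomm_nthreads 4 -threadcomm_affinities 0,1,2,3

This asks for 4 threads pinned to cores 0-3. With -threadcomm_type openmp the -threadcomm_affinities option has no effect, so the affinity has to come from the OpenMP runtime itself, as discussed below.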
> 
> Another question: the linux box I'm testing on has 2 quad-cores (8 cores/2 sockets). If I run the code with mpirun -np 8, I get about a 3X speedup, which makes sense to me. However, if I run with either pthread (which seems to use all cores) or openmp (which always defaults to 1 core no matter what)
As I said before, you have to set the core affinity for OpenMP through environment variables.
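With GNU OpenMP (libgomp) that would look roughly like this, assuming bash and an 8-core machine (your_app is again a placeholder):

    export GOMP_CPU_AFFINITY="0-7"
    ./your_app -threadcomm_type openmp -threadcomm_nthreads 8

GOMP_CPU_AFFINITY is the variable mentioned in my earlier mail quoted below; other OpenMP implementations use their own environment variables for affinity.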
> I get the same performance as serial.
i) Which example are you running? Please send the output of -log_summary; a sample command is shown after ii) below.
ii) Only the vector and a few matrix operations are threaded currently. There are no threaded preconditioners yet.
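For reference, a minimal way to produce that output is to append -log_summary to the run you are timing, e.g. (placeholder executable name again):

    ./your_app -threadcomm_type pthread -threadcomm_nthreads 8 -log_summary

The summary lists the vector and matrix operations that were executed along with the time spent in each, which will show where the threading is (or is not) helping.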
> Does this mean there is something messed up with the hardware and/or how mpi/openmp/pthread is set up?

Shri
> 
> 
> On Fri, Nov 2, 2012 at 8:24 PM, Shri <abhyshr at mcs.anl.gov> wrote:
> 
> On Nov 2, 2012, at 9:54 PM, Mohammad Mirzadeh wrote:
> 
> > Hi,
> >
> > To use the new thread model in PETSc, does it suffice to run the code with the following?
> >
> > -threadcomm_type <openmp/pthread> -threadcomm_nthreads <#>
> >
> > When I run the code with openmp, only 1 processor/core is active (looking at top). When using pthread, all cores are active. Am I missing something?
> OpenMP defaults to binding all threads to a single core if the cpu affinity is not specified explicitly. If you
> are using GNU OpenMP then you can bind the threads to specific cores using the environment variable GOMP_CPU_AFFINITY, see http://gcc.gnu.org/onlinedocs/libgomp/GOMP_005fCPU_005fAFFINITY.html
> If you are using some other OpenMP implementation then check its manual to see how to set cpu affinity.
> 
> Shri
> 
> 


