[petsc-users] petsc and goto blas

David Fuentes fuentesdt at gmail.com
Wed Aug 31 00:29:17 CDT 2011


Thanks. It ended up being something with the MPI compilers I was using;
it was difficult to track down.
It works with the --download-mpi=yes PETSc configure option, though!

When using the "--download-mpi=yes" option on a small cluster,
are the configure scripts able to detect an InfiniBand switch and
configure/install MPICH to communicate over it accordingly?

Either way,
thanks!
df

On Mon, Aug 29, 2011 at 7:40 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> On Aug 29, 2011, at 5:49 PM, David Fuentes wrote:
>
>> Is there any reason why PETSc compiled to link against the GotoBLAS shared
>> libraries would not run multi-threaded by default?
>
>   We don't do anything to prevent it from running multi-threaded. The first thing I would suggest is to make sure it truly is linking and running against the threaded GotoBLAS and that something else doesn't get in between.
>
>   Barry
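
One quick way to double-check which BLAS is actually in play is to look at the shared objects mapped into the Python process once petsc4py has been imported. A minimal sketch, assuming Linux (where /proc/self/maps lists the loaded shared objects); the substrings filtered on below ("goto", "blas", "lapack", "mkl") are only guesses at typical BLAS library file names:

# Sketch: after petsc4py loads PETSc (and whatever BLAS it was linked with),
# print the BLAS-looking shared libraries mapped into this process.
# Linux-specific: relies on /proc/self/maps.
from petsc4py import PETSc  # importing this pulls in the PETSc/BLAS shared libs

libs = set()
with open("/proc/self/maps") as maps:
    for line in maps:
        parts = line.split()
        if len(parts) > 5 and any(s in parts[5].lower()
                                  for s in ("goto", "blas", "lapack", "mkl")):
            libs.add(parts[5])

for lib in sorted(libs):
    print(lib)

If the threaded GotoBLAS library does not show up in that list, PETSc was presumably linked against some other BLAS than the one the standalone tests exercised.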
>
>>
>> I've set both OMP_NUM_THREADS=8 and GOTO_NUM_THREADS=8
>>
>> but when I call dgemm from PETSc I can't seem to get it to run on
>> multiple cores (<= 100% CPU usage in top).
>> I checked, and the test routines installed with the GotoBLAS library build
>> run multi-threaded when called without PETSc (~600% CPU usage in top).
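
One detail that can matter here: GotoBLAS (and the OpenMP runtime) generally read their thread-count variables from the environment when the library is initialized, so if they are set from inside Python they need to be in place before petsc4py is imported. A minimal sketch of that ordering (exporting the variables in the shell before launching Python is equivalent); the value 8 is just the count used above:

import os
# Set the thread counts before the BLAS shared library gets loaded;
# setting them after the import may be too late to take effect.
os.environ.setdefault("GOTO_NUM_THREADS", "8")
os.environ.setdefault("OMP_NUM_THREADS", "8")

from petsc4py import PETSc  # the GotoBLAS library is loaded here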
>>
>>
>> I'm calling MatMatMult with dense matrices from petsc4py.
>>
>> from petsc4py import PETSc
>> import numpy
>> n = 10000
>>
>> # two dense n x n PETSc matrices built from random numpy arrays
>> J1 = PETSc.Mat().createDense([n, n], array=numpy.random.rand(n, n),
>>                              comm=PETSc.COMM_WORLD)
>> J1.assemblyBegin(); J1.assemblyEnd()
>> J2 = PETSc.Mat().createDense([n, n], array=numpy.random.rand(n, n),
>>                              comm=PETSc.COMM_WORLD)
>> J2.assemblyBegin(); J2.assemblyEnd()
>>
>> # dense-dense product; this should call BLAS dgemm underneath
>> X = J1.matMult(J2)
>
>

