building petsc for myrinet

Satish Balay balay at mcs.anl.gov
Mon Apr 16 11:44:10 CDT 2007


> {/usr/local/Cluster-Apps/intel/mkl/8.0/lib/em64t/libguide.so: undefined reference to `pthread_atfork'

try using:

--with-blas-lapack-lib=\[/usr/local/Cluster-Apps/intel/mkl/8.0/lib/em64t/libmkl_lapack64.so\,libmkl_def.so\,libguide.so\,-lpthread\]
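
The missing symbol comes from libguide.so itself (Intel's OpenMP runtime),
which is why -lpthread has to be listed explicitly. If you want to confirm
this, a quick check (a sketch, assuming standard binutils nm) is:

  nm -D /usr/local/Cluster-Apps/intel/mkl/8.0/lib/em64t/libguide.so | grep pthread_atfork

A 'U' in the output means libguide.so leaves pthread_atfork undefined and
expects the linker to resolve it from libpthread.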

or just:

--with-blas-lapack-dir=/usr/local/Cluster-Apps/intel/mkl/8.0
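
Combined with the other options from your original invocation, that would
look something like this (untested sketch; the remaining flags are taken
verbatim from your message):

  ./config/configure.py --with-blas-lapack-dir=/usr/local/Cluster-Apps/intel/mkl/8.0 --with-vendor-compilers=intel --with-gnu-compilers=0 --with-shared

With --with-blas-lapack-dir, configure should search that directory for a
working MKL library combination on its own, so the individual .so files
(and -lpthread) don't have to be spelled out by hand.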

Satish



On Mon, 16 Apr 2007, SLIM H.A. wrote:

> Dear users
> 
> I have been able to build PETSc for an Ethernet interconnect, with both
> static and shared-object libraries, using the Intel compiler and Intel's
> MKL libraries for LAPACK/BLAS.
> 
> Now I want to build the libraries for MPI over Myrinet as well, using the
> same configuration arguments as before (with mpiCC wrapping the Intel
> compiler and the Myrinet libraries), e.g.
> 
> ./config/configure.py \
>     --with-blas-lapack-lib=\[/usr/local/Cluster-Apps/intel/mkl/8.0/lib/em64t/libmkl_lapack64.so\,libmkl_def.so\,libguide.so\] \
>     --with-vendor-compilers=intel --with-gnu-compilers=0 --with-shared
> 
> Now I get the error:
> 
> *********************************************************************************
>          UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
> ---------------------------------------------------------------------------------
> You set a value for --with-blas-lapack-lib=<lib>, but
> ['/usr/local/Cluster-Apps/intel/mkl/8.0/lib/em64t/libmkl_lapack64.so',
> 'libmkl_def.so', 'libguide.so'] cannot be used
> *********************************************************************************
> 
> Examining the configure.log shows that a reference to a pthread function
> is undefined:
> 
> sh: 
> Possible ERROR while running linker:
> /usr/local/Cluster-Apps/intel/mkl/8.0/lib/em64t/libguide.so: undefined reference to `pthread_atfork'
>  output: ret = 256
> error message =
> {/usr/local/Cluster-Apps/intel/mkl/8.0/lib/em64t/libguide.so: undefined reference to `pthread_atfork'
> 
> 
> Should pthread.so be added somewhere in the configuration?
> 
> Thanks for any advice
> 
> Henk
> 
> 



