[petsc-dev] Error compiling PETSc on Windows

Satish Balay balay at mcs.anl.gov
Sat Jul 7 19:55:28 CDT 2018


On Sat, 7 Jul 2018, Hector E Barrios Molano wrote:

> Thanks Barry and Satish for your answers.
> 
> I installed the correct version of hypre. Also, I changed the paths to short
> dos paths as Satish suggested. Now PETSc compiles without problems and the
> tests are ok.
> 
> Regarding Satish notes:
> 
> I had to include -LIBS because otherwise the configure script stops while
> testing blas-lapack with the following error:
> 
> ===============================================================================
> TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:114)
> *******************************************************************************
>          UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
> -------------------------------------------------------------------------------
> You set a value for --with-blaslapack-lib=<lib>, but
> ['-L/cygdrive/c/PROGRA~2/INTELS~1/COMPIL~1/windows/mkl/lib/intel64',
> 'mkl_intel_lp64.lib', 'mkl_core.lib', 'mkl_intel_thread.lib'] cannot be used
> *******************************************************************************

Well, send us the configure.log for this failure [and also the configure.log
for the subsequent successful run with the LIBS option].
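
For reference, the rejected value corresponds to a configure option roughly
like the following (paths abbreviated as in your error output; this is a
sketch reconstructed from the message above, not a known-good command line,
and it omits your other options and the extra LIBS setting):

  ./configure \
    --with-blaslapack-lib='-L/cygdrive/c/PROGRA~2/INTELS~1/COMPIL~1/windows/mkl/lib/intel64 mkl_intel_lp64.lib mkl_core.lib mkl_intel_thread.lib' \
    [other options]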

> 
> Do I need to turn on the MKL sparse functionality?

I'm not sure what you are using PETSc for. If you elaborate, perhaps Richard
can suggest whether MKL_SPARSE is useful for you.

> what is the difference
> between MKL_SPARSE, MKL_SPARSE_OPTIMIZE and MKL_SPARSE_SP2M in the configure
> options?

Older MKL versions have only limited sparse functionality, so these flags
[detected automatically by configure] control how much of MKL's sparse
functionality PETSc uses.
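
One way to see which of these configure detected for your build is to look at
the generated petscconf.h (assuming the usual PETSC_ARCH build layout; the
macro names are assumed here to follow the PETSC_HAVE_<flag> pattern):

  grep MKL_SPARSE $PETSC_DIR/$PETSC_ARCH/include/petscconf.h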

> Why is it better to use sequential MKL?

Because PETSc primarily uses the MPI programming model, and uses sequential BLAS in that model.
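
Concretely, using sequential MKL usually just means swapping the threading
layer library in the list you already pass, e.g. (a sketch; confirm the exact
library set for your MKL version with Intel's link-line advisor):

  --with-blaslapack-lib='-L/cygdrive/c/PROGRA~2/INTELS~1/COMPIL~1/windows/mkl/lib/intel64 mkl_intel_lp64.lib mkl_core.lib mkl_sequential.lib'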

> Is it possible to use a hybrid MPI-OpenMP approach? For example, MPI for
> internode communication and OpenMP for intranode computation? In that case,
> would it be good to use threaded MKL + MPI?

You might want to check the thread "GAMG error with MKL" on the petsc-dev list.
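
If you do end up linking the threaded MKL but want it to behave sequentially
under MPI, the usual knob is MKL's own environment variable (standard MKL
behavior, not anything PETSc-specific):

  export MKL_NUM_THREADS=1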

Satish

