[petsc-users] Is there a way of having PETSc compile hypre with provided intel/mkl?
Mark Adams
mfadams at lbl.gov
Mon Jul 30 08:42:56 CDT 2018
Hypre embeds LAPACK routines in its own source, so I doubt you can have
hypre use MKL. Hypre supports OpenMP now, but I have been having a lot of
numerical problems with hypre/OpenMP on KNL.
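If you want to double-check which LAPACK a --download-hypre build actually
picked up, something like the following should tell you. This is only a
sketch: the library path assumes PETSc's usual $PETSC_ARCH install layout,
and the hypre_ prefix on bundled routines is what I would expect from
hypre's renamed internal BLAS/LAPACK, not something I have verified on
your build.

  # Bundled routines carry hypre's own symbol prefix (e.g. hypre_dgetrf);
  # a build using external BLAS/LAPACK would reference MKL at link time.
  nm $PETSC_DIR/$PETSC_ARCH/lib/libHYPRE.a | grep -i dgetrf
  ldd $PETSC_DIR/$PETSC_ARCH/lib/libHYPRE.so 2>/dev/null | grep -i mkl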
On Mon, Jul 30, 2018 at 9:24 AM Bastian Löhrer <
bastian.loehrer at tu-dresden.de> wrote:
> On 30.07.2018 13:28, Matthew Knepley wrote:
>
> On Mon, Jul 30, 2018 at 5:48 AM Bastian Löhrer <
> bastian.loehrer at tu-dresden.de> wrote:
>
>> Dear PETSc users,
>>
>> I'm configuring PETSc to use an Intel stack including intel/mkl:
>> ./configure PETSC_ARCH=$PETSC_ARCH \
>>   --with-gnu-compilers=0 \
>>   --with-vendor-compilers=intel \
>>   --with-large-file-io=1 \
>>   --CFLAGS="-L${I_MPI_ROOT}/intel64/lib -I${I_MPI_ROOT}/intel64/include -lmpi" \
>>   --CXXFLAGS="-L${I_MPI_ROOT}/intel64/lib -I${I_MPI_ROOT}/intel64/include -lmpi -lmpicxx" \
>>   --FFLAGS="-L${I_MPI_ROOT}/intel64/lib -I${I_MPI_ROOT}/intel64/include -lmpi" \
>>   --LDFLAGS="-L${I_MPI_ROOT}/intel64/lib -I${I_MPI_ROOT}/intel64/include -lmpi" \
>>   --with-blas-lapack-dir="${MKLROOT}/lib/intel64" \
>>   --download-hypre \
>>   --with-debugging=yes
>>
>>
>> *Two questions:*
>>
>>
>> *1)* The blas-lapack-dir option is not passed down to the compilation
>> of hypre, according to $PETSC_DIR/$PETSC_ARCH/conf/hypre.
>> *Is there a way of having PETSc compile hypre with my intel/mkl?*
>>
>
> This should happen. Please send configure.log so we can see what went on.
>
> My initial guess was that the file $PETSC_DIR/$PETSC_ARCH/conf/hypre lists
> the parameters used for compiling hypre, and, as I said, that file does
> not mention mkl anywhere.
> I may be mistaken, though: on a second look I see that the mkl library is
> mentioned in the configure log around line 86661 ff. (where hypre is being
> configured) and again at the end of the log.
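>
> For what it's worth, this is roughly how I searched both files (the grep
> patterns are only my guess at the relevant strings):
>
>   # BLAS/LAPACK/MKL settings handed down to hypre's configure:
>   grep -iE "blas|lapack|mkl" $PETSC_DIR/$PETSC_ARCH/conf/hypre
>   # mkl mentions in the main configure log:
>   grep -n -i mkl configure.log | head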
>
> Here is my configure.log:
>
> https://cloudstore.zih.tu-dresden.de/index.php/s/b6rT0WMAKEMsj8S/download
>
> Here is that hypre file:
>
> https://cloudstore.zih.tu-dresden.de/index.php/s/TSfXQ2pgDw5ALZm/download
>
> Thanks,
> Bastian
>
>
>
>
>> *2)* *Should I omit or include any option?* I have come across a few
>> options in previous configuration calls used at my department, which I
>> have removed from my configuration call because
>>
>> a) I had the impression that they were of no additional use:
>>
>> - --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-mpi=1
>> - --with-pic, because --with-pic=1 is the default anyway since 2.3.1,
>> is it not?
>> (https://www.mcs.anl.gov/petsc/documentation/changes/231.html)
>> - --with-mpiexec=mpirun
>> - --with-mpi-compilers=1
>> - --known-mpi-shared=1
>> - --with-mpi-dir=...
>> - COPTFLAGS="-g" CXXOPTFLAGS="-g" FOPTFLAGS="-g" because
>> --with-debugging=yes adds them anyway
>>
> Yes, getting rid of all these is fine.
>
>> b) I couldn't figure out what they were actually for:
>>
>> - --configModules=PETSc.Configure
>> - --optionsModule=PETSc.compilerOptions
>>
> Those are added automatically. These are hooks so that you can completely
> change the system without getting rid of the low-level tests.
>
>> c) others:
>>
>> - --known-64-bit-blas-indices: I guess it wouldn't hurt anyway, so I'll
>> include this option the next time I configure PETSc
>> (http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscBLASInt.html)
>>
> We have a bunch of --known-* options. They are used for any test that has
> to execute. Batch environments cannot run such tests, and thus you need to
> specify all of them. You do not need this option unless you are using
> --with-batch.
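>
> For illustration, a batch-mode configure would look roughly like this; the
> exact set of --known-* options and their values depend on the machine, so
> treat these as placeholders rather than recommendations:
>
>   ./configure --with-batch \
>     --known-mpi-shared-libraries=1 \
>     --known-64-bit-blas-indices=0 \
>     --known-level1-dcache-size=32768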
>
> Okay, fantastic, thank you!
>