[petsc-users] FW: [alcf-support #195845] PETSc compilation

Satish Balay balay at mcs.anl.gov
Thu Jan 23 13:33:19 CST 2014


After getting an allocation [for your project] you can continue following
the suggestions printed by configure below - and run 'conftest' on a node.
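
For example [a sketch only - this assumes Cobalt's qsub on Vesta, and
MYPROJECT is a placeholder for whatever allocation you end up using;
adjust the node count, walltime and mode to your case]:

  # submit conftest to one node for ~10 minutes under your allocation
  qsub -n 1 -t 10 -A MYPROJECT --mode c1 ./conftest

Once that job has produced reconfigure.py in the petsc directory, run
./reconfigure.py there to complete the configure process.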

The option '--prefix=/soft/libraries/petsc/3.3-p2/xl-opt' doesn't make
sense for your install. It can be removed.

And you can remove the blacs/scalapack options as well [if you are not
using MUMPS from PETSc, you don't need blacs/scalapack]. PETSc
primarily requires MPI and blas/lapack.
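
A trimmed-down configure for a user-account build could then look like
the following [a rough sketch - the compilers, blas/lapack paths and
PETSC_ARCH are taken from your original command line; double check that
your petsc version accepts these option names]:

  ./configure --with-batch=1 \
    --with-cc=mpixlc_r --with-cxx=mpixlcxx_r --with-fc="mpixlf77_r -qnosave" \
    --with-blas-lapack-lib="-L/soft/libraries/alcf/current/xl/LAPACK/lib -llapack -L/soft/libraries/alcf/current/xl/BLAS/lib -lblas" \
    --with-debugging=0 --with-shared-libraries=0 --with-x=0 \
    PETSC_ARCH=arch-bgq-ibm-opt

Without --prefix the libraries stay under the PETSC_ARCH directory in
your petsc source tree, which is what you want for a per-user install.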

We generally recommend upgrading user codes to the latest petsc version
[it will have bug fixes for issues you might encounter].

Satish

On Thu, 23 Jan 2014, Tsien, Victor wrote:

> petsc group,
> 
> Can you help?  
> 
> Thanks,
> 
> Victor
> ________________________________________
> From: Mauri Ponga [mponga at caltech.edu]
> Sent: Wednesday, January 22, 2014 5:46 PM
> To: support at alcf.anl.gov
> Subject: [alcf-support #195845] PETSc compilation
> 
> User info for mponga at caltech.edu
> =================================
> Username:  ponga
> Full Name: Mauricio Ponga
> Projects:  ATPESC2013
>              ('*' denotes INCITE projects)
> =================================
> 
> 
> Hi,
> 
> I am trying to compile my program on Vesta. In order to compile it I need
> PETSc. However, I am using an old version (petsc-3.0.0-p8) which I am trying
> to configure and compile in my user account. I used the following options:
> 
> ./configure --prefix=/soft/libraries/petsc/3.3-p2/xl-opt --with-batch=1 \
>   --with-blacs-include=/soft/libraries/alcf/current/xl/SCALAPACK/ \
>   --with-blacs-lib=/soft/libraries/alcf/current/xl/SCALAPACK/lib/libscalapack.a \
>   --with-blas-lapack-lib="-L/soft/libraries/alcf/current/xl/LAPACK/lib -llapack -L/soft/libraries/alcf/current/xl/BLAS/lib -lblas" \
>   --with-cc=mpixlc_r --with-cxx=mpixlcxx_r --with-debugging=0 \
>   --with-fc="mpixlf77_r -qnosave" --with-fortran-kernels=1 \
>   --with-is-color-value-type=short \
>   --with-scalapack-include=/soft/libraries/alcf/current/xl/SCALAPACK/ \
>   --with-scalapack-lib=/soft/libraries/alcf/current/xl/SCALAPACK/lib/libscalapack.a \
>   --with-shared-libraries=0 --with-x=0 \
>   -COPTFLAGS=" -O3 -qhot=level=0 -qsimd=auto -qmaxmem=-1 -qstrict -qstrict_induction" \
>   -CXXOPTFLAGS=" -O3 -qhot=level=0 -qsimd=auto -qmaxmem=-1 -qstrict -qstrict_induction" \
>   -FOPTFLAGS=" -O3 -qhot=level=0 -qsimd=auto -qmaxmem=-1 -qstrict -qstrict_induction" \
>   PETSC_ARCH=arch-bgq-ibm-opt with-mpi-shared=0
> 
> Configuration finished fine, but at the end of the script the following
> message appeared:
> 
> =================================================================================
>  Since your compute nodes require use of a batch system or mpiexec you must:
>  1) Submit ./conftest to 1 processor of your batch system or system you are
>     cross-compiling for; this will generate the file reconfigure.py
>  2) Run ./reconfigure.py (to complete the configure process).
> =================================================================================
> 
> I tried to run the command ./conftest using jobscript.sh (in
> /home/ponga/petsc-3.0.0-p8) but the following message appeared:
> 
> ponga is not a member of MM-MEDE
> Projects available: ATPESC2013
> For assistance, contact support at alcf.anl.gov
> Filter /soft/cobalt/scripts/clusterbank-account failed
> 
> 
> So, I wonder if you can switch my account to the project MM-MEDE, of which I
> am a member (and also the PI). I also wonder whether I am following the right
> steps to configure PETSc in my user account.
> 
> Thank you.
> 
> Mauricio Ponga
> 
> 


