[petsc-dev] Moving from BG/L to BG/P

Satish Balay balay at mcs.anl.gov
Fri Oct 14 08:13:16 CDT 2011


On Thu, 13 Oct 2011, Jed Brown wrote:

> On Thu, Oct 13, 2011 at 23:01, <Kevin.Buckley at ecs.vuw.ac.nz> wrote:
> 
> > Hi there,
> >
> > Some time ago I built a PETSc (3.0.0-p9) on the BG/L down at NZ's
> > Univ of Canterbury, for a researcher here at VUW who wanted to
> > run the PISM ice-sheet modelling code on top of it, and in the
> > process discovered that the version installed by the sys admins
> > at the facility was buggy, so the researcher carried on compiling
> > later PISMs against my 3.0.0-p9.
> >
> > UoC are in the process of upgrading to a BG/P and the researcher
> > has asked me to see if I can cobble things together. He is keen to
> > run with what he had before and is under "time pressure" to get
> > the remaining results needed for a paper, so is hoping I can
> > install something ahead of the facility's sys admins installing
> > a system-wide version.
> >
> > Notwithstanding the obvious upgrade from our 3.0 series of PETSc
> > to the current 3.2 release, I notice that the petsc-bgl-tools
> > wrapper package (not surprisingly, I guess, given the bgl bit)
> > only provides for the IBM 7.0 and 8.0 compiler suites, so have
> > you guys tried out PETSc on a BG/P yet?
> >
> 
> BG/P has been out for a while, so of course people have been running PETSc
> on it for years. You can look at
> 
> config/examples/arch-bgp-ibm-opt.py
> 
> in the source tree. Here is a configuration that we used for some benchmarks
> on Shaheen last year:
> 
> Configure options: --with-x=0 --with-is-color-value-type=short
> --with-debugging=1 --with-fortran-kernels=1
> --with-mpi-dir=/bgsys/drivers/ppcfloor/comm --with-batch=1
> --known-mpi-shared-libraries=1 --known-memcmp-ok --known-sizeof-char=1
> --known-sizeof-void-p=4 --known-sizeof-short=2 --known-sizeof-int=4
> --known-sizeof-long=4 --known-sizeof-size_t=4 --known-sizeof-long-long=8
> --known-sizeof-float=4 --known-sizeof-double=8 --known-bits-per-byte=8
> --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4
> --known-mpi-long-double=1 --known-level1-dcache-assoc=0
> --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768
> --download-hypre=1 --with-shared=0
> --prefix=/opt/share/ksl/petsc/dev-dec9-hypre/ppc450d-bgp_xlc_hypre_fast
> --with-clanguage=c --COPTFLAGS="     -O3 -qhot" --CXXOPTFLAGS="     -O3
> -qhot" --FOPTFLAGS="     -O3 -qhot" --LIBS="
> -L/bgsys/ibm_essl/sles10/prod/opt/ibmmath/lib -L/opt/ibmcmp/xlsmp/bg/1.7/lib
> -L/opt/ibmcmp/xlmass/bg/4.4/bglib -L/opt/ibmcmp/xlf/bg/11.1/bglib
> -L/bgsys/ibm_essl/sles10/prod/opt/ibmmath/lib -L/opt/ibmcmp/xlsmp/bg/1.7/lib
> -L/opt/ibmcmp/xlmass/bg/4.4/bglib -L/opt/ibmcmp/xlf/bg/11.1/bglib   -lesslbg
>  -lxlf90_r -lxlopt -lxlsmp -lxl -lxlfmath  -lesslbg  -lxlf90_r -lxlopt
> -lxlsmp -lxl -lxlfmath -O3"
> --CC=/bgsys/drivers/ppcfloor/comm/xl/bin/mpixlc_r
> --CXX=/bgsys/drivers/ppcfloor/comm/xl/bin/mpixlcxx_r
> --FC=/bgsys/drivers/ppcfloor/comm/xl/bin/mpixlf90_r --with-debugging=0
> PETSC_ARCH=bgp-xlc-hypre-fast

Just to clarify - petsc-bgl-tools was useful for the first generation
of BlueGene [bg/l] with the first generation of the software stack -
where the mpixlc/mpixlf77/mpixlf90 wrappers for using the IBM
compilers with MPI on these machines were absent.

Subsequent machines [bg/p] and perhaps the software stack updates to
bg/l do provide proper mpixlc/mpixlf77/mpixlf90 wrappers [so
petsc-bgl-tools isn't needed anymore].

As Jed suggested - use config/examples/arch-bgp-ibm-opt.py - modify it
appropriately for the external packages required - and run it.
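
For reference - the config/examples scripts are just small python
wrappers around configure, so a modified copy might look something
like the sketch below. [The option values here are only illustrative
- taken from the configuration Jed posted above, not the actual
contents of arch-bgp-ibm-opt.py - adjust compilers, paths and the
--download-* entries for the external packages you actually need.]

#!/usr/bin/env python

# Sketch of a config/examples-style script for BG/P
# [illustrative options only - edit to match your installation]
configure_options = [
  '--with-cc=/bgsys/drivers/ppcfloor/comm/xl/bin/mpixlc_r',
  '--with-cxx=/bgsys/drivers/ppcfloor/comm/xl/bin/mpixlcxx_r',
  '--with-fc=/bgsys/drivers/ppcfloor/comm/xl/bin/mpixlf90_r',
  '--with-debugging=0',
  '--with-batch=1',       # login node cannot run compute-node binaries
  '--download-hypre=1',   # example external package - add/remove as needed
  ]

if __name__ == '__main__':
  import sys, os
  sys.path.insert(0, os.path.abspath('config'))
  import configure
  configure.petsc_configure(configure_options)

Run it from the top of the petsc source tree. With --with-batch=1,
configure stops partway and asks you to run the generated conftest
binary through the batch system, then run the generated reconfigure
script to finish.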

Satish


