[petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10
Victor Eijkhout
eijkhout at tacc.utexas.edu
Tue Mar 26 23:06:41 CDT 2019
On Mar 26, 2019, at 6:25 PM, Mark Adams via petsc-dev <petsc-dev at mcs.anl.gov> wrote:
/home1/04906/bonnheim/olympus-keaveny/Olympus/olympus.petsc-3.9.3.skx-cxx-O on a skx-cxx-O named c478-062.stampede2.tacc.utexas.edu with 4800 processors, by bonnheim Fri Mar 15 04:48:27 2019
I see you’re still using a PETSc build with the reference BLAS/LAPACK (--download-fblaslapack) and a downloaded MPICH running over Ethernet instead of the Intel OPA fabric:
Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --with-cxx=clang++ COPTFLAGS="-g -mavx2" CXXOPTFLAGS="-g -mavx2" FOPTFLAGS="-g -mavx2" --download-mpich=1 --download-hypre=1 --download-metis=1 --download-parmetis=1 --download-c2html=1 --download-ctetgen --download-p4est=1 --download-superlu_dist --download-superlu --download-triangle=1 --download-hdf5=1 --download-fblaslapack=1 --download-zlib --with-x=0 --with-debugging=0 PETSC_ARCH=skx-cxx-O --download-chaco --with-viewfromoptions=1
Working directory: /home1/04906/bonnheim/petsc-3.9.3
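For reference, a configure along these lines would pick up MKL and the site MPI on the Stampede2 SKX nodes instead. This is only a sketch: it assumes the intel and impi modules are loaded, so that the mpicc/mpicxx/mpif90 wrappers and $MKLROOT are available, and it uses clang/gcc-style AVX-512 flags:

  ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 \
    --with-blas-lapack-dir=$MKLROOT \
    COPTFLAGS="-g -O3 -march=skylake-avx512" \
    CXXOPTFLAGS="-g -O3 -march=skylake-avx512" \
    FOPTFLAGS="-g -O3 -march=skylake-avx512" \
    --download-hypre=1 --download-metis=1 --download-parmetis=1 \
    --with-debugging=0 PETSC_ARCH=skx-cxx-O

The key change is dropping --download-mpich and --download-fblaslapack so the build uses the Omni-Path MPI and MKL rather than generic MPICH and the reference BLAS/LAPACK.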
I alerted you guys about this months ago.
Victor.