<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class="">
<div class=""><br class="">
</div>
<div>
<blockquote type="cite" class="">
<div class="">On Apr 3, 2017, at 1:44 PM, Justin Chang <<a href="mailto:jychang48@gmail.com" class="">jychang48@gmail.com</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div dir="ltr" class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">Richard,<br class="">
<br class="">
</div>
This is what my job script looks like:<br class="">
<br class="">
#!/bin/bash<br class="">
#SBATCH -N 16<br class="">
#SBATCH -C knl,quad,flat<br class="">
#SBATCH -p regular<br class="">
#SBATCH -J knlflat1024<br class="">
#SBATCH -L SCRATCH<br class="">
#SBATCH -o knlflat1024.o%j<br class="">
#SBATCH --mail-type=ALL<br class="">
#SBATCH --mail-user=<a href="mailto:jychang48@gmail.com" class="">jychang48@gmail.com</a><br class="">
#SBATCH -t 00:20:00<br class="">
<br class="">
#run the application:<br class="">
cd $SCRATCH/Icesheet<br class="">
sbcast --compress=lz4 ./ex48cori /tmp/ex48cori<br class="">
srun -n 1024 -c 4 --cpu_bind=cores numactl -p 1 /tmp/ex48cori -M 128 -N 128 -P 16 -thi_mat_type baij -pc_type mg -mg_coarse_pc_type gamg -da_refine 1<br class="">
<br class="">
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<div><br class="">
</div>
<div>Maybe it is a typo: it should be numactl -m 1, not numactl -p 1.</div>
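<div class="">For reference, the corrected srun line would look like this (a sketch; in flat mode on KNL, the MCDRAM is exposed as NUMA node 1):</div>
<div class=""><br class=""></div>

```shell
# Bind all allocations to MCDRAM (NUMA node 1 in flat mode).
# numactl -m fails if the 16 GB of MCDRAM is exhausted, whereas
# numactl -p only prefers MCDRAM and silently spills to DDR.
srun -n 1024 -c 4 --cpu_bind=cores numactl -m 1 /tmp/ex48cori \
  -M 128 -N 128 -P 16 -thi_mat_type baij -pc_type mg \
  -mg_coarse_pc_type gamg -da_refine 1
```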
<div class=""><br class="">
</div>
<blockquote type="cite" class="">
<div class="">
<div dir="ltr" class="">
<div class="">
<div class="">
<div class="">
<div class="">According to the NERSC info pages, you should add the "numactl" command when using flat mode. Previously I tried cache mode, but the performance seemed unaffected.<br class="">
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<div><br class="">
</div>
<div>Using cache mode should give performance similar to flat mode with the numactl option, but both approaches should be significantly faster than flat mode without the numactl option. I usually see over a 3x speedup. You can also run this comparison to check whether the high-bandwidth memory is working properly.</div>
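<div class="">A sketch of that comparison, as fragments of three separate job scripts (same application and problem size; only the node constraint and memory binding change):</div>
<div class=""><br class=""></div>

```shell
# Job A -- cache mode: MCDRAM acts as a transparent last-level cache,
# so no explicit memory binding is needed.
#SBATCH -C knl,quad,cache
srun -n 1024 -c 4 --cpu_bind=cores /tmp/ex48cori -log_view

# Job B -- flat mode with binding: MCDRAM is NUMA node 1.
#SBATCH -C knl,quad,flat
srun -n 1024 -c 4 --cpu_bind=cores numactl -m 1 /tmp/ex48cori -log_view

# Job C -- flat mode without numactl: allocations land in DDR only.
#SBATCH -C knl,quad,flat
srun -n 1024 -c 4 --cpu_bind=cores /tmp/ex48cori -log_view
# If MCDRAM is working, A and B should be comparable and both clearly
# faster than C for bandwidth-bound runs.
```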
<br class="">
<blockquote type="cite" class="">
<div class="">
<div dir="ltr" class="">
<div class="">
<div class="">
<div class="">I also compared 256 Haswell nodes vs. 256 KNL nodes, and Haswell is nearly 4-5x faster, though I suspect this drastic difference has much to do with the initial coarse grid size now being extremely small.</div>
</div>
</div>
</div>
</div>
</blockquote>
<blockquote type="cite" class="">
<div class="">
<div dir="ltr" class="">
<div class="">
<div class="">I'll give the COPTFLAGS a try and see what happens.<br class="">
</div>
</div>
</div>
</div>
</blockquote>
<div><br class="">
</div>
<div>Make sure to use --with-memalign=64 for data alignment when configuring PETSc.</div>
<div><br class="">
</div>
<div>The option -xMIC-AVX512 should improve vectorization performance, but it may cause problems for the MPIBAIJ format for reasons that are not yet understood. MPIAIJ should work fine with this option.</div>
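<div class="">Putting these suggestions together, a KNL build might be configured along these lines (a sketch based only on the options discussed in this thread; the PETSC_ARCH name is arbitrary, and the flags assume the Intel programming environment):</div>
<div class=""><br class=""></div>

```shell
# Hypothetical KNL-tuned configure line for Cori, combining the
# suggestions above: 64-byte alignment, -O3, and AVX-512 codegen.
./configure --with-cc=cc --with-cxx=CC --with-fc=ftn \
  --with-debugging=0 --with-64-bit-indices=1 --with-memalign=64 \
  --with-mpiexec=srun \
  COPTFLAGS="-g -O3 -xMIC-AVX512" CXXOPTFLAGS="-g -O3 -xMIC-AVX512" \
  FOPTFLAGS="-g -O3 -xMIC-AVX512" PETSC_ARCH=arch-cori-knl-opt
```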
<div><br class="">
</div>
<div>Hong (Mr.)</div>
<br class="">
<blockquote type="cite" class="">
<div class="">
<div dir="ltr" class="">
<div class="">Thanks,<br class="">
</div>
Justin<br class="">
</div>
<div class="gmail_extra"><br class="">
<div class="gmail_quote">On Mon, Apr 3, 2017 at 1:36 PM, Richard Mills <span dir="ltr" class="">
<<a href="mailto:richardtmills@gmail.com" target="_blank" class="">richardtmills@gmail.com</a>></span> wrote:<br class="">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr" class="">
<div class="">
<div class="">
<div class="">
<div class="">Hi Justin,<br class="">
<br class="">
</div>
How is the MCDRAM (on-package "high-bandwidth memory") configured for your KNL runs? And if it is in "flat" mode, what are you doing to ensure that you use the MCDRAM? Doing this wrong seems to be one of the most common reasons for unexpected poor performance
on KNL.<br class="">
<br class="">
</div>
<div class="">I'm not that familiar with the environment on Cori, but I think that if you are building for KNL, you should add "-xMIC-AVX512" to your compiler flags to explicitly instruct the compiler to use the AVX512 instruction set. I usually use something
along the lines of<br class="">
<br class="">
'COPTFLAGS=-g -O3 -fp-model fast -xMIC-AVX512'<br class="">
<br class="">
</div>
<div class="">(The "-g" just adds symbols, which make the output from performance profiling tools much more useful.)
<br class="">
</div>
<div class=""><br class="">
</div>
That said, I think that if you are comparing 1024 Haswell cores vs. 1024 KNL cores (so double the number of Haswell nodes), I'm not surprised that the simulations are almost twice as fast on the Haswell nodes. Keep in mind that an individual KNL core is much less powerful than an individual Haswell core. You are also using roughly twice the power footprint (a dual-socket Haswell node should be roughly equivalent to a KNL node, I believe). How do things look when you compare equal numbers of nodes?<br class="">
<br class="">
</div>
Cheers,<br class="">
</div>
Richard<br class="">
</div>
<div class="HOEnZb">
<div class="h5">
<div class="gmail_extra"><br class="">
<div class="gmail_quote">On Mon, Apr 3, 2017 at 11:13 AM, Justin Chang <span dir="ltr" class="">
<<a href="mailto:jychang48@gmail.com" target="_blank" class="">jychang48@gmail.com</a>></span> wrote:<br class="">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr" class="">Hi all,
<div class=""><br class="">
</div>
<div class="">On NERSC's Cori I have the following configure options for PETSc:</div>
<div class=""><br class="">
</div>
<div class="">./configure --download-fblaslapack --with-cc=cc --with-clib-autodetect=0 --with-cxx=CC --with-cxxlib-autodetect=0 --with-debugging=0 --with-fc=ftn --with-fortranlib-autodetect=0 --with-mpiexec=srun --with-64-bit-indices=1 COPTFLAGS=-O3 CXXOPTFLAGS=-O3
FOPTFLAGS=-O3 PETSC_ARCH=arch-cori-opt</div>
<div class=""><br class="">
</div>
<div class="">Where I swapped out the default Intel programming environment with that of Cray (e.g., 'module switch PrgEnv-intel/6.0.3 PrgEnv-cray/6.0.3'). I want to document the performance difference between Cori's Haswell and KNL processors.</div>
<div class=""><br class="">
</div>
<div class="">When I run a PETSc example like SNES ex48 on 1024 cores (32 Haswell and 16 KNL nodes), the simulations are almost twice as fast on the Haswell nodes, which leads me to suspect that I am not doing something right for KNL. Does anyone know some "optimal" configure options for running PETSc on KNL?</div>
<div class=""><br class="">
</div>
<div class="">Thanks,</div>
<div class="">Justin</div>
</div>
</blockquote>
</div>
<br class="">
</div>
</div>
</div>
</blockquote>
</div>
<br class="">
</div>
</div>
</blockquote>
</div>
<br class="">
</body>
</html>