Matt is correct, the vectors are way too small.

BTW: Now would be a good time to run some of the Report I benchmarks on
Crusher to get a feel for the kernel launch times and the performance of the
VecOps.

Also Report 2.
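Even a throwaway probe along these lines would give a first read on where
launch latency ends and bandwidth begins. (A rough sketch only, not one of
the actual report benchmarks; the file name and size sweep are made up, and
it assumes a Kokkos or HIP build so that -vec_type kokkos puts the vectors
on the GPU.)

    /* axpy_probe.c: time VecAXPY across sizes. At small n the cost per call
       is essentially kernel-launch latency; at large n it should approach
       the HBM bandwidth limit (VecAXPY streams about three vectors of
       traffic: read x, read y, write y). */
    #include <petscvec.h>
    #include <petsctime.h>

    int main(int argc, char **argv)
    {
      Vec            x, y;
      PetscReal      nrm;
      PetscLogDouble t0, t1;
      PetscInt       n;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      for (n = 1000; n <= 100000000; n *= 10) {
        ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
        ierr = VecSetSizes(x, PETSC_DECIDE, n);CHKERRQ(ierr);
        ierr = VecSetFromOptions(x);CHKERRQ(ierr);  /* honors -vec_type */
        ierr = VecDuplicate(x, &y);CHKERRQ(ierr);
        ierr = VecSet(x, 1.0);CHKERRQ(ierr);
        ierr = VecSet(y, 2.0);CHKERRQ(ierr);
        ierr = VecAXPY(y, 3.0, x);CHKERRQ(ierr);    /* warm-up; first call pays setup costs */
        ierr = PetscTime(&t0);CHKERRQ(ierr);
        for (int i = 0; i < 100; i++) { ierr = VecAXPY(y, 3.0, x);CHKERRQ(ierr); }
        ierr = VecNorm(y, NORM_2, &nrm);CHKERRQ(ierr); /* forces queued kernels to finish */
        ierr = PetscTime(&t1);CHKERRQ(ierr);
        ierr = PetscPrintf(PETSC_COMM_WORLD, "n = %9d  %8.2f us per VecAXPY\n", (int)n, 1e6*(t1 - t0)/100);CHKERRQ(ierr);
        ierr = VecDestroy(&x);CHKERRQ(ierr);
        ierr = VecDestroy(&y);CHKERRQ(ierr);
      }
      ierr = PetscFinalize();
      return ierr;
    }

The VecNorm inside the timed region adds one reduction per 100 AXPYs, so it
perturbs the numbers only slightly while guaranteeing the asynchronous
kernels have actually completed before the clock stops.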
  Barry

> On Jan 21, 2022, at 7:58 PM, Matthew Knepley <knepley@gmail.com> wrote:
>
> On Fri, Jan 21, 2022 at 6:41 PM Mark Adams <mfadams@lbl.gov> wrote:
>
>> I am looking at the performance of a CG/Jacobi solve on a 3D Q2 Laplacian
>> (ex13) on one Crusher node (8 GPUs on 4 GPU sockets; MI250X, or is it
>> MI200?). This is with a 16M equation problem. GPU-aware MPI and
>> non-GPU-aware MPI are similar (mat-vec is a little faster without it, the
>> total is about the same; call it noise).
>>
>> I found that MatMult was about 3x faster using 8 cores/GPU, that is all
>> 64 cores on the node, than when using 1 core/GPU, with the same size
>> problem of course. I was thinking MatMult should be faster with just one
>> MPI process. Oh well, worry about that later.
>>
>> The bigger problem, and I have observed this to some extent with the
>> Landau TS/SNES/GPU solver on the V100s and A100s, is that the vector
>> operations are expensive or crazy expensive. You can see from the
>> attached output, and from the times here, that the solve is dominated by
>> everything other than mat-vec:
>>
>> ------------------------------------------------------------------------------------------------------------------------
>> Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total   GPU    - CpuToGpu -   - GpuToCpu - GPU
>>                    Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s Mflop/s Count   Size   Count   Size  %F
>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------
>> 17:15 main= /gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tests/data$ grep "MatMult 400" jac_out_00*5_8_gpuawaremp*
>> MatMult              400 1.0 1.2507e+00 1.3 1.34e+10 1.1 3.7e+05 1.6e+04 0.0e+00  1 55 62 54  0  27 91 100 100  0  668874       0      0 0.00e+00    0 0.00e+00 100
>> 17:15 main= /gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tests/data$ grep "KSPSolve 2" jac_out_001*_5_8_gpuawaremp*
>> KSPSolve               2 1.0 4.4173e+00 1.0 1.48e+10 1.1 3.7e+05 1.6e+04 1.2e+03  4 60 62 54 61 100 100 100 100 100  208923 1094405      0 0.00e+00    0 0.00e+00 100
>>
>> Notes about the flop counters here:
>> * The MatMult flops are not logged as GPU flops, but something is logged nonetheless.
>> * The GPU flop rate is 5x the total flop rate in KSPSolve :\
>> * I think these nodes have an FP64 peak flop rate of 200 Tflops, so we are at < 1% of peak.
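(A quick sanity check on that last bullet: a Crusher node has 4 MI250X
cards at roughly 48 double-precision vector Tflops each, so ~190 Tflops
aggregate, consistent with the 200 Tflops figure; the 208923 Mflop/s
KSPSolve rate is then indeed about 0.1% of peak.)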
>
> This looks complicated, so just a single remark: my understanding of the
> benchmarking of vector ops led by Hannah was that you needed to be much
> bigger than 16M to hit peak. I need to get the tech report, but on 8 GPUs
> I would think you would be at 10% of peak or something right off the bat
> at these sizes. Barry, is that right?
>
>   Thanks,
>
>      Matt
>
>> Anyway, I am not sure how to proceed, but I thought I would share.
>> Maybe ask the Kokkos guys if they have looked at Crusher.
>>
>> Mark
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
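Back-of-the-envelope on the size question: 16M equations over 8 GCDs is 2M
doubles, i.e. 16 MB of vector per GCD. A VecAXPY streams roughly three
vectors, ~48 MB, which at the MI250X's ~1.6 TB/s of HBM bandwidth per GCD
is ~30 microseconds -- only a few kernel launches' worth of time, and each
CG iteration also has dot products that synchronize with the host. So at
these sizes the vector ops sit far below the bandwidth peak; the vectors
likely need to be one to two orders of magnitude larger per GCD before the
VecOps numbers approach it.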