<html><head><meta http-equiv="Content-Type" content="text/html; charset=us-ascii"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class=""><br class=""></div>Indeed, PCSetUp is taking most of the time (79%). In the version of PETSc you are running, it does a great deal of the setup work on the CPU. You can also see a lot of data movement between the CPU and GPU (in both directions) during the setup: 64 copies from CPU to GPU (1.91e+03 MB) and 54 copies from GPU to CPU (1.21e+03 MB).<div class=""><br class=""></div><div class="">Clearly, we need help in porting all the parts of the GAMG setup that still occur on the CPU to the GPU.</div><div class=""><br class=""></div><div class="">  Barry</div><div class=""><br class=""><div class=""><br class=""></div><div class=""><br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On Mar 22, 2022, at 12:07 PM, Qi Yang <<a href="mailto:qiyang@oakland.edu" class="">qiyang@oakland.edu</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class="">Dear Barry,<div class=""><br class=""></div><div class="">Your advice was helpful: the total time has dropped from 30s to 20s (all matrix operations now run on the GPU). I have also tried other settings for the GAMG preconditioner, such as -pc_gamg_threshold 0.05 -pc_gamg_threshold_scale 0.5, but they did not seem to help much.</div><div class="">The key point seems to be the PCSetUp stage: the log shows it takes the most time, and the new Nsight Systems analysis shows a big gap before the KSP solver starts, which looks like PCSetUp. Am I right?</div><div class=""><span id="cid:ii_l12buewx4"><3.png></span><br class=""></div><div class=""><br class=""></div><div class="">PCSetUp 2 1.0 1.5594e+01 1.0 3.06e+09 1.0 0.0e+00 0.0e+00 0.0e+00 79 78 0 0 0 79 78 0 0 0 196 8433 64 1.91e+03 54 1.21e+03 90<br class=""></div><div class=""><br class=""></div><div class=""><br class=""></div><div class="">Regards,</div><div 
class="">Qi</div></div><br class=""><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 22, 2022 at 10:44 PM Barry Smith <<a href="mailto:bsmith@petsc.dev" class="">bsmith@petsc.dev</a>> wrote:<br class=""></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="overflow-wrap: break-word;" class=""><div class=""><br class=""></div> It is using <div class=""><br class=""></div><div class="">MatSOR 369 1.0 9.1214e+00 1.0 7.32e+09 1.0 0.0e+00 0.0e+00 0.0e+00 29 27 0 0 0 29 27 0 0 0 803 0 0 0.00e+00 565 1.35e+03 0</div><div class=""><br class=""></div><div class="">which runs on the CPU not the GPU hence the large amount of time in memory copies and poor performance. We are switching the default to be Chebyshev/Jacobi which runs completely on the GPU (may already be switched in the main branch). </div><div class=""><br class=""></div><div class="">You can run with <span style="font-family:Menlo;font-size:14px" class="">-mg_levels_pc_type</span><span style="font-family:Menlo;font-size:14px" class=""> jacobi</span> You should then see almost the entire solver running on the GPU.</div><div class=""><font face="Menlo" class=""><span style="font-size:14px" class=""><br class=""></span></font></div><div class="">You may need to tune the number of smoothing steps or other parameters of GAMG to get the faster solution time.</div><div class=""><br class=""></div><div class=""> Barry</div><div class=""><br class=""><div class=""><br class=""><blockquote type="cite" class=""><div class="">On Mar 22, 2022, at 10:30 AM, Qi Yang <<a href="mailto:qiyang@oakland.edu" target="_blank" class="">qiyang@oakland.edu</a>> wrote:</div><br class=""><div class=""><div dir="ltr" class=""><div dir="ltr" class=""><div dir="ltr" class="">To whom it may concern,<div class=""><div class=""><br class=""></div><div class="">I have tried petsc ex50(Possion) with cuda, ksp cg solver and gamg precondition, 
however, it run for about 30s. I also tried NVIDIA AMGX with the same solver and same grid (3000*3000), it only took 2s. I used nsight system software to analyze those two cases, found petsc took much time in the memory process (63% of total time, however, amgx only took 19%). Attached are screenshots of them.</div><div class=""><br class=""></div><div class="">The petsc command is : mpiexec -n 1 ./ex50 -da_grid_x 3000 -da_grid_y 3000 -ksp_type cg -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -vec_type cuda -mat_type aijcusparse -ksp_monitor -ksp_view -log-view </div><div class=""><br class=""></div><div class="">The log file is also attached.</div><div class=""><br class=""></div><div class="">Regards,</div><div class="">Qi</div><div class=""><br class=""></div><div class=""><span id="gmail-m_-5254694205004387314cid:ii_l1288l930" class=""><1.png></span><br class=""></div></div><div class=""><span id="gmail-m_-5254694205004387314cid:ii_l1288w5h1" class=""><2.png></span><br class=""></div></div></div></div>
<span id="gmail-m_-5254694205004387314cid:f_l128i6sx2" class=""><log.PETSc_cg_amg_ex50_gpu_cuda></span></div></blockquote></div><br class=""></div></div></blockquote></div>
<span id="cid:f_l12by7jf4"><log.PETSc_cg_amg_jacobi_ex50_gpu_cuda></span></div></blockquote></div><br class=""></div></div></body></html>