------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total    GPU    - CpuToGpu -   - GpuToCpu - GPU
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s Mflop/s Count   Size   Count   Size  %F
---------------------------------------------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

KSPSolve              48 1.0 9.2492e+00 1.1 8.22e+08 1.2 5.9e+05 3.6e+03 1.3e+03 17 99 88 97 78  17 99 88 97 79  51484 1792674    446 1.73e+02  957 3.72e+02 100
KSPGMRESOrthog       306 1.1 2.2495e-01 1.5 3.86e+08 1.2 0.0e+00 0.0e+00 3.0e+02  0 46  0  0 18   0 46  0  0 18 973865 2562528     94 3.67e+01    0 0.00e+00 100
PCApply              354 1.1 5.7478e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 11  0  0  0  0  11  0  0  0  0      0       0      0 0.00e+00  675 2.62e+02   0

It is GPU %F that is the percent of the flops, not of the time. Since hypre does not count flops, the only flops counted are the PETSc ones; the hypre flops, which take place on the CPU, are not counted.

Note that PCApply (this is hypre) takes 11 percent of the time, and KSPSolve (which is hypre plus GMRES) takes 17 percent of the time, so 11/17 of the solve time is not on the GPU. Note also the huge number of copies to and from the GPU above; this is because the data has to be moved to the CPU for hypre and then back to the GPU for PETSc.

  Barry


On Aug 4, 2020, at 5:46 AM, nicola varini <nicola.varini@gmail.com> wrote:

Thanks for your reply, Stefano. I know that HYPRE is not ported to the GPU, but the solver is running on the GPU: it is taking ~9 s and is showing 100% GPU utilization.
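
For reference, here is a minimal sketch of the kind of configuration the log above reflects: GPU-resident Mat/Vec types with a hypre preconditioner. This is not the actual application code; the 1-D Laplacian, sizes, and solver choices are placeholders, and it assumes a PETSc build with CUDA and hypre. Because hypre runs on the CPU here, every PCApply has to pull the vector back to the host and push the result to the device again, which is what the CpuToGpu/GpuToCpu columns record.

/* Minimal sketch (placeholder operator and sizes), assuming PETSc built with CUDA and hypre. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, b;
  KSP            ksp;
  PC             pc;
  PetscInt       n = 1000, i, Istart, Iend;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetType(A, MATAIJCUSPARSE);CHKERRQ(ierr);           /* matrix lives on the GPU */
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
  for (i = Istart; i < Iend; i++) {                             /* placeholder 1-D Laplacian */
    if (i > 0)   {ierr = MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);CHKERRQ(ierr);}
    if (i < n-1) {ierr = MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);CHKERRQ(ierr);}
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);                /* vectors inherit the CUDA type */
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPFGMRES);CHKERRQ(ierr);              /* orthogonalization runs on the GPU */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCHYPRE);CHKERRQ(ierr);                  /* hypre (BoomerAMG by default) runs on the CPU */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);                     /* each PCApply copies device<->host */

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Run with -log_view, a setup like this should show the same pattern as above: KSPGMRESOrthog stays on the device while PCApply accounts for the GpuToCpu/CpuToGpu traffic.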
<br class=""></div><br class=""><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Il giorno mar 4 ago 2020 alle ore 12:35 Stefano Zampini <<a href="mailto:stefano.zampini@gmail.com" target="_blank" class="">stefano.zampini@gmail.com</a>> ha scritto:<br class=""></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class=""><div class="">Nicola,</div><div class=""><br class=""></div><div class="">You are actually not using the GPU properly, since you use HYPRE preconditioning, which is CPU only. One of your solvers is actually slower on “GPU”.</div><div class="">For a full AMG GPU, you can use PCGAMG, with cheby smoothers and with Jacobi preconditioning. Mark can help you out with the specific command line options.</div><div class="">When it works properly, everything related to PC application is offloaded to the GPU, and you should expect to get the well-known and branded 10x (maybe more) speedup one is expecting from GPUs during KSPSolve</div><div class=""><br class=""></div><div class="">Doing what you want to do is one of the last optimization steps of an already optimized code before entering production. Yours is not even optimized for proper GPU usage yet.</div><div class="">Also, any specific reason why you are using dgmres and fgmres?</div><div class=""><br class=""></div><div class="">PETSc has not been designed with multi-threading in mind. You can achieve “overlap” of the two solves by splitting the communicator. But then you need communications to let the two solutions talk to each other.</div><div class=""><br class=""></div><div class="">Thanks</div><div class="">Stefano</div><div class=""><br class=""><div class=""><br class=""><blockquote type="cite" class=""><div class="">On Aug 4, 2020, at 12:04 PM, nicola varini <<a href="mailto:nicola.varini@gmail.com" target="_blank" class="">nicola.varini@gmail.com</a>> wrote:</div><br class=""><div class=""><div dir="ltr" class=""><div class=""><div class=""><div class=""><div class=""><div class="">Dear all, thanks for your replies. The reason why I've asked if it is possible to overlap poisson and ampere is because they roughly<br class=""></div>take the same amount of time. Please find in attachment the profiling logs for only CPU and only GPU.<br class=""></div>Of course it is possible to split the MPI communicator and run each solver on different subcommunicator, however this would involve more communication.<br class=""></div>Did anyone ever tried to run 2 solvers with hyperthreading? <br class=""></div><div class="">Thanks<br class=""></div></div><div class=""><div class=""><div class=""><div class=""><br class=""></div></div></div></div></div><br class=""><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Il giorno dom 2 ago 2020 alle ore 14:09 Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank" class="">mfadams@lbl.gov</a>> ha scritto:<br class=""></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr" class="">I suspect that the Poisson and Ampere's law solve are not coupled. You might be able to duplicate the communicator and use two threads. 

On Sun, Aug 2, 2020 at 2:09 PM Mark Adams <mfadams@lbl.gov> wrote:

I suspect that the Poisson and Ampere's law solves are not coupled. You might be able to duplicate the communicator and use two threads. You would want to configure PETSc with threadsafety and threads, and I think it could/should work, but this mode is never used by anyone.

That said, I would not recommend doing this unless you feel like playing in computer science, as opposed to doing application science. In the best-case scenario you get a speedup of 2x. That is a strict upper bound, and you will never come close to it. Your hardware has some balance of CPU to GPU processing rate, and your application has some balance of work between the two solves; those two ratios have to match, and both have to be 1:1, to get close to a 2x speedup. To be concrete, from what little I can guess about your application, let's assume the cost of each of the two solves is about the same (e.g., Laplacians on your domain, the best-case scenario). But GPU machines these days are configured with roughly 1-10% of their capacity in the CPUs, which gives you an upper bound of about a 10% speedup. That is noise. Upshot: unless you configure your hardware to match this problem, and the two solves have the same cost, you will not see anything close to a 2x speedup. Your time is better spent elsewhere.

Mark
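
To make those numbers concrete (the figures are illustrative, not measurements from this machine): call the work of each solve W, the GPU rate Rg, and the CPU rate Rc, and take Rc = 0.1*Rg as a typical node balance. Then roughly:

    both solves on the GPU:            2W / Rg
    one on each, run concurrently:     max(W/Rg, W/Rc) = 10 W/Rg    (5x slower)
    ideal split matching the rates:    2W / (Rg + Rc)  ~ 1.8 W/Rg   (~10% faster)

So with a 1:10 CPU:GPU balance the naive "one solve on each" split is a large slowdown, and even the best possible split only buys the ~10% mentioned above.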

On Sat, Aug 1, 2020 at 3:24 PM Jed Brown <jed@jedbrown.org> wrote:

You can use MPI and split the communicator so n-1 ranks create a DMDA for one part of your system and the other rank drives the GPU in the other part. They can all be part of the same coupled system on the full communicator, but PETSc doesn't currently support some ranks having their Vec arrays on the GPU and others on the host, so you'd be paying host-device transfer costs on each iteration (and that might swamp any performance benefit you would have gotten).

In any case, be sure to think about the execution time of each part. Load balancing with matching time-to-solution for each part can be really hard.
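
A bare-bones sketch of the communicator splitting discussed above: here the last rank gets its own subcommunicator (e.g. to drive the GPU) and the remaining ranks share a DMDA. The grid sizes, rank placement, and which equation goes where are all made up, it assumes at least two MPI ranks, and the exchange of coupling data between the two groups, which is the hard and costly part, is left out entirely.

/* Sketch only: split PETSC_COMM_WORLD into a CPU group and a one-rank GPU group. */
#include <petsc.h>

int main(int argc, char **argv)
{
  PetscMPIInt    rank, size, color;
  MPI_Comm       subcomm;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);

  /* color 0: n-1 ranks for the CPU part; color 1: one rank for the GPU part */
  color = (rank == size - 1) ? 1 : 0;
  ierr  = MPI_Comm_split(PETSC_COMM_WORLD, color, rank, &subcomm);CHKERRQ(ierr);

  if (color == 0) {
    DM da;
    /* hypothetical sizes; each group builds its own DMDA/KSP on subcomm */
    ierr = DMDACreate2d(subcomm, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                        DMDA_STENCIL_STAR, 128, 128, PETSC_DECIDE, PETSC_DECIDE,
                        1, 1, NULL, NULL, &da);CHKERRQ(ierr);
    ierr = DMSetFromOptions(da);CHKERRQ(ierr);
    ierr = DMSetUp(da);CHKERRQ(ierr);
    /* ... the CPU solve (e.g. Ampere) lives here ... */
    ierr = DMDestroy(&da);CHKERRQ(ierr);
  } else {
    /* ... the GPU solve (e.g. Poisson with -dm_vec_type cuda) lives here ... */
  }

  ierr = MPI_Comm_free(&subcomm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Each group then builds its DMDA/KSP on subcomm exactly as it would on PETSC_COMM_WORLD; the two groups still have to exchange any coupling data over MPI (and through the host on the GPU side), which is the transfer cost mentioned above.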

Barry Smith <bsmith@petsc.dev> writes:

> Nicola,
>
> This is not really viable or practical at this time with PETSc. It is not impossible, but it requires careful coding with threads; another possibility is to use one half of the virtual GPUs for each solve, which is also not trivial. I would recommend first seeing what kind of performance you can get on the GPU for each type of solve and revisiting this idea in the future.
>
> Barry
>
>
>
>
>> On Jul 31, 2020, at 9:23 AM, nicola varini <nicola.varini@gmail.com> wrote:
>>
>> Hello, I would like to know if it is possible to overlap CPU and GPU with DMDA.
>> I have a machine where each node has one P100 + one Haswell.
>> I have to solve the Poisson and Ampere equations at each time step.
>> I'm using a 2D DMDA for each of them. Would it be possible to compute the Poisson
>> and Ampere equations at the same time, one on the CPU and the other on the GPU?
>>
>> Thanks
<span id="gmail-m_-3935710115496477991gmail-m_8625892341582166835cid:f_kdfro0bi0" class=""><out_gpu></span><span id="gmail-m_-3935710115496477991gmail-m_8625892341582166835cid:f_kdfro8hf1" class=""><out_nogpu></span></div></blockquote></div><br class=""></div></div></blockquote></div>