On Feb 22, 2023, at 2:19 PM, Paul Grosse-Bley <paul.grosse-bley@ziti.uni-heidelberg.de> wrote:

> Hi again,
>
> After checking with -ksp_monitor for PCMG, it seems my assumption that I could reset the solution by calling KSPSetComputeInitialGuess and then KSPSetUp was generally wrong, and BoomerAMG was just the only preconditioner that cleverly stops doing work when the residual has already converged (which then caused me to find the shrinking residual and to think that it was doing something different from the other methods).

  Depending on your example, the KSPSetComputeInitialGuess usage might not work. I suggest not using it for your benchmarking.

  But if you just call KSPSolve() multiple times, it will zero the solution at the beginning of each KSPSolve() (unless you use KSPSetInitialGuessNonzero()). So if you want to run multiple solves for testing, you should not need to do anything. This will be true for any use of KSPSolve(), whether the PC is from PETSc, hypre, or NVIDIA.
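For illustration, a minimal sketch of such a repeated-solve loop, assuming ksp, b, and x are already set up and that BENCH_SOLVES and the PetscLogStage solve_bench_stage are defined as in the benchmarking snippet quoted at the bottom of this thread:

  /* Sketch only: KSPSolve() zeroes the solution at the start of each call
     (KSPSetInitialGuessNonzero() is not set), so no explicit reset is needed. */
  for (PetscInt i = 0; i < BENCH_SOLVES; i++) {
    PetscCall(PetscLogStagePush(solve_bench_stage));
    PetscCall(KSPSolve(ksp, b, x)); /* or KSPSolve(ksp, NULL, NULL) when b and x come from a DM */
    PetscCall(PetscLogStagePop());
  }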
> So, how can I get KSP to use the function given through KSPSetComputeInitialGuess to reset the solution vector (without calling KSPReset, which would add a lot of overhead, I assume)?
>
> Best,
> Paul Große-Bley
>
> On Wednesday, February 22, 2023 19:46 CET, Mark Adams <mfadams@lbl.gov> wrote:
>
>> OK, Nsight Systems is a good way to see what is going on.
>>
>> So all three of your solvers are not traversing the MG hierarchy with the correct logic. I don't know about hypre, but PCMG and AMGx are pretty simple, and AMGx dives into the AMGx library directly from our interface. Some things to try:
>>
>> * Use -options_left to make sure your options are being used (e.g., spelling mistakes).
>> * Use -ksp_view to see a human-readable list of your solver parameters.
>> * Use -log_trace to see if the correct methods are called.
>>   - PCMG calls PCMGMCycle_Private for each of the cycles, in code like:
>>       for (i = 0; i < mg->cyclesperpcapply; i++) PetscCall(PCMGMCycle_Private(pc, mglevels + levels - 1, transpose, matapp, NULL));
>>   - AMGx is called via PCApply_AMGX, which then dives into the library. See where these three calls to AMGx are called from.
>>
>> Mark
>>
>> On Wed, Feb 22, 2023 at 1:10 PM Paul Grosse-Bley <paul.grosse-bley@ziti.uni-heidelberg.de> wrote:
>>
>>> Hi Mark,
>>>
>>> I use Nvidia Nsight Systems with --trace cuda,nvtx,osrt,cublas-verbose,cusparse-verbose together with the NVTX markers that come with -log_view, i.e. I get a nice view of all cuBLAS and cuSPARSE calls (in addition to the actual kernels, which are not always easy to attribute). For PCMG and PCGAMG I also use -pc_mg_log for even more detailed NVTX markers.
>>>
>>> The "signature" of a V-cycle in PCMG, PCGAMG, and PCAMGX is pretty clear because kernel runtimes on coarser levels are much shorter. At the coarsest level, there normally isn't even enough work for the GPU (Nvidia A100) to be fully occupied, which is also visible in Nsight Systems.
>>>
>>> I run only a single MPI rank with a single GPU, so profiling is straightforward.
>>>
>>> Best,
>>> Paul Große-Bley
>>>
>>> On Wednesday, February 22, 2023 18:24 CET, Mark Adams <mfadams@lbl.gov> wrote:
>>>
>>>> On Wed, Feb 22, 2023 at 11:15 AM Paul Grosse-Bley <paul.grosse-bley@ziti.uni-heidelberg.de> wrote:
>>>>
>>>>> Hi Barry,
>>>>>
>>>>> After using VecCUDAGetArray to initialize the RHS, that kernel still gets called as part of KSPSolve instead of KSPSetUp, but its runtime is far less significant than the cudaMemcpy before, so I guess I will leave it like this. Other than that, I kept the code as in my first message in this thread (as you wrote, benchmark_ksp.c is not well suited for PCMG).
>>>>>
>>>>> The profiling results for PCMG and PCAMG look as I would expect them to, i.e. one can nicely see the GPU load/kernel runtimes going down and up again for each V-cycle.
>>>>>
>>>>> I was wondering about -pc_mg_multiplicative_cycles, as it does not seem to make any difference. I would have expected to be able to increase the number of V-cycles per KSP iteration, but I keep seeing just a single V-cycle when changing the option (using PCMG).
>>>>
>>>> How are you seeing this?
>>>> You might try -log_trace to see if you get two V-cycles.
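As a sketch of how multiple cycles per preconditioner application could be requested programmatically (assuming ksp already uses PCMG with its levels set up; this mirrors the -pc_mg_multiplicative_cycles option and the cyclesperpcapply loop quoted above):

  PC pc;
  PetscCall(KSPGetPC(ksp, &pc));
  /* two multiplicative (V-)cycles per PCApply; equivalent to -pc_mg_multiplicative_cycles 2 */
  PetscCall(PCMGMultiplicativeSetCycles(pc, 2));
  /* V- vs. W-cycles is a separate setting */
  PetscCall(PCMGSetCycleType(pc, PC_MG_CYCLE_V));

If the setting takes effect, the V-cycle signature should repeat twice per preconditioner application in Nsight Systems or in the -log_trace output.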
>>>>> When using BoomerAMG from PCHYPRE, calling KSPSetComputeInitialGuess between bench iterations to reset the solution vector does not seem to work, as the residual keeps shrinking. Is this a bug? Any advice for working around this?
>>>>
>>>> Looking at the doc https://petsc.org/release/docs/manualpages/KSP/KSPSetComputeInitialGuess/ you use this with KSPSetComputeRHS.
>>>>
>>>> In src/snes/tests/ex13.c I just zero out the solution vector.
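For context, the pairing that manual page describes looks roughly like the following, assuming a DMDA da and ex45.c-style callbacks (the names ComputeMatrix, ComputeRHS, and ComputeInitialGuess are illustrative):

  PetscCall(KSPSetDM(ksp, da));
  PetscCall(KSPSetComputeOperators(ksp, ComputeMatrix, NULL));
  PetscCall(KSPSetComputeRHS(ksp, ComputeRHS, NULL));
  PetscCall(KSPSetComputeInitialGuess(ksp, ComputeInitialGuess, NULL));
  /* with NULL b and x, KSPSolve() builds the vectors from the DM and invokes the callbacks */
  PetscCall(KSPSolve(ksp, NULL, NULL));

The simpler alternative mentioned above is to skip KSPSetComputeInitialGuess entirely and reset the solution directly, e.g. with VecZeroEntries() on the solution vector, before each timed solve.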
>>>>> The profile for BoomerAMG also doesn't really show the V-cycle behavior of the other implementations. Most of the runtime seems to go into calls to cusparseDcsrsv, which might happen at the different levels, but the runtime of these kernels doesn't show the V-cycle pattern. According to the output with -pc_hypre_boomeramg_print_statistics it is doing the right thing though, so I guess it is all right (and if not, this is probably the wrong place to discuss it).
>>>>>
>>>>> When using PCAMGX, I see two PCApply calls (each showing a nice V-cycle behavior) in KSPSolve (three for the very first KSPSolve) while expecting just one. Each KSPSolve should do a single preconditioned Richardson iteration. Why is the preconditioner applied multiple times here?
>>>>
>>>> Again, not sure what "see" is, but PCAMGX is pretty new and has not been used much.
>>>> Note that some KSP methods apply the PC before the iteration.
>>>>
>>>> Mark
>>>>
>>>>> Thank you,
>>>>> Paul Große-Bley
>>>>>
>>>>> On Monday, February 06, 2023 20:05 CET, Barry Smith <bsmith@petsc.dev> wrote:
>>>>>
>>>>>> It should not crash; take a look at the test cases at the bottom of the file. You are likely correct: if the code, unfortunately, does use DMCreateMatrix(), it will not work out of the box for geometric multigrid. So it might be the wrong example for you.
>>>>>>
>>>>>> I don't know what you mean about clever. If you simply set the solution to zero at the beginning of the loop, then it will just do the same solve multiple times. The setup should not do much of anything after the first solve. Though usually solves are big enough that one need not run solves multiple times to get a good understanding of their performance.
>>>>>>
>>>>>> On Feb 6, 2023, at 12:44 PM, Paul Grosse-Bley <paul.grosse-bley@ziti.uni-heidelberg.de> wrote:
>>>>>>
>>>>>>> Hi Barry,
>>>>>>>
>>>>>>> src/ksp/ksp/tutorials/bench_kspsolve.c is certainly the better starting point, thank you! Sadly, I get a segfault when executing that example with PCMG and more than one level, i.e. with the minimal args:
>>>>>>>
>>>>>>> $ mpiexec -c 1 ./bench_kspsolve -split_ksp -pc_type mg -pc_mg_levels 2
>>>>>>> ===========================================
>>>>>>> Test: KSP performance - Poisson
>>>>>>>   Input matrix: 27-pt finite difference stencil
>>>>>>>   -n 100
>>>>>>>   DoFs = 1000000
>>>>>>>   Number of nonzeros = 26463592
>>>>>>>
>>>>>>> Step1  - creating Vecs and Mat...
>>>>>>> Step2a - running PCSetUp()...
>>>>>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
>>>>>>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
>>>>>>> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
>>>>>>> [0]PETSC ERROR: or try https://docs.nvidia.com/cuda/cuda-memcheck/index.html on NVIDIA CUDA systems to find memory corruption errors
>>>>>>> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
>>>>>>> [0]PETSC ERROR: to get more information on the crash.
>>>>>>> [0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is causing the crash.
>>>>>>> --------------------------------------------------------------------------
>>>>>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>>>>>>> with errorcode 59.
>>>>>>>
>>>>>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>>>>>> You may or may not see output from other processes, depending on
>>>>>>> exactly when Open MPI kills them.
>>>>>>> --------------------------------------------------------------------------
>>>>>>>
>>>>>>> As the matrix is not created using DMDACreate3d, I expected it to fail due to the missing geometric information, but I expected it to fail more gracefully than with a segfault. I will try to combine bench_kspsolve.c with ex45.c to get easy MG preconditioning, especially since I am interested in the 7-pt stencil for now.
>>>>>>>
>>>>>>> Concerning my benchmarking loop from before: is it generally discouraged to do this for KSPSolve because PETSc cleverly/lazily skips some of the work when doing the same solve multiple times, or are the solves not iterated in bench_kspsolve.c (while the MatMults are, with -matmult) just to keep the runtime short?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Paul
>>>>>>>
>>>>>>> On Monday, February 06, 2023 17:04 CET, Barry Smith <bsmith@petsc.dev> wrote:
>>>>>>>
>>>>>>>> Paul,
>>>>>>>>
>>>>>>>> I think src/ksp/ksp/tutorials/benchmark_ksp.c is the code intended to be used for simple benchmarking.
>>>>>>>>
>>>>>>>> You can use VecCUDAGetArray() to access the GPU memory of the vector and then call a CUDA kernel to compute the right-hand-side vector directly on the GPU.
>>>>>>>>
>>>>>>>> Barry
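A sketch of what that could look like, assuming PETSc was built with CUDA, b is a VECCUDA vector, and this file is compiled with nvcc (the kernel body is only a placeholder):

  #include <petscvec.h> /* depending on the PETSc version, petscdevice_cuda.h may also be needed */

  __global__ void FillRHS(PetscScalar *b, PetscInt n)
  {
    PetscInt i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i] = 1.0; /* placeholder: compute the real right-hand-side entry here */
  }

  static PetscErrorCode ComputeRHSOnGPU(Vec b)
  {
    PetscScalar *d_b;
    PetscInt     n;

    PetscFunctionBeginUser;
    PetscCall(VecGetLocalSize(b, &n));
    PetscCall(VecCUDAGetArray(b, &d_b));     /* device pointer, no host-to-device copy */
    FillRHS<<<(int)((n + 255) / 256), 256>>>(d_b, n);
    PetscCall(VecCUDARestoreArray(b, &d_b)); /* marks the device data as current */
    PetscFunctionReturn(0);
  }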
>>>>>>>> On Feb 6, 2023, at 10:57 AM, Paul Grosse-Bley <paul.grosse-bley@ziti.uni-heidelberg.de> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I want to compare different implementations of multigrid solvers for Nvidia GPUs using the Poisson problem (starting from KSP tutorial example ex45.c). Therefore I am trying to get runtime results comparable to hpgmg-cuda (https://bitbucket.org/nsakharnykh/hpgmg-cuda/src/master/, finite-volume), i.e. using multiple warmup and measurement solves and avoiding measuring setup time. For now I am using -log_view with added stages:
>>>>>>>>>
>>>>>>>>> PetscLogStageRegister("Solve Bench", &solve_bench_stage);
>>>>>>>>> for (int i = 0; i < BENCH_SOLVES; i++) {
>>>>>>>>>   PetscCall(KSPSetComputeInitialGuess(ksp, ComputeInitialGuess, NULL)); // reset x
>>>>>>>>>   PetscCall(KSPSetUp(ksp));                       // try to avoid setup overhead during solve
>>>>>>>>>   PetscCall(PetscDeviceContextSynchronize(dctx)); // make sure that everything is done
>>>>>>>>>
>>>>>>>>>   PetscLogStagePush(solve_bench_stage);
>>>>>>>>>   PetscCall(KSPSolve(ksp, NULL, NULL));
>>>>>>>>>   PetscLogStagePop();
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> This snippet is preceded by a similar loop for warmup.
>>>>>>>>>
>>>>>>>>> When profiling this using Nsight Systems, I see that the very first solve is much slower, which mostly corresponds to H2D (host-to-device) copies and e.g. cuBLAS setup (maybe also paging overheads as mentioned in the docs, https://petsc.org/release/docs/manual/profiling/#accurate-profiling-and-paging-overheads, but probably insignificant in this case). The following solves have some overhead at the start from a H2D copy of a vector (the RHS, I guess, as the copy is preceded by a matrix-vector product) in the first MatResidual call (call chain: KSPSolve->MatResidual->VecAYPX->VecCUDACopyTo->cudaMemcpyAsync). My interpretation of the profiling results (i.e. the cuBLAS calls) is that that vector is overwritten with the residual in Daxpy and therefore has to be copied again for the next iteration.
>>>>>>>>>
>>>>>>>>> Is there an elegant way of avoiding that H2D copy? I have seen some examples on constructing matrices directly on the GPU, but nothing about vectors. Any further tips for benchmarking (vs. profiling) PETSc solvers? At the moment I am using Jacobi as the smoother, but I would like to have a CUDA implementation of SOR instead. Is there a good way of achieving that, e.g. using PCHYPRE's BoomerAMG with a single level and the "SOR/Jacobi" smoother as the smoother in PCMG? Or is the overhead from constantly switching between PETSc and hypre too big?
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Paul
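For reference, a sketch of how the per-level smoother is selected programmatically, assuming pc is already a PCMG with nlevels levels set; this is equivalent to -mg_levels_ksp_type richardson -mg_levels_pc_type jacobi on the command line, and an alternative smoother would be plugged in at the PCSetType() call:

  for (PetscInt l = 1; l < nlevels; l++) { /* level 0 is the coarse-grid solver */
    KSP smoother;
    PC  spc;
    PetscCall(PCMGGetSmoother(pc, l, &smoother));
    PetscCall(KSPSetType(smoother, KSPRICHARDSON));
    PetscCall(KSPGetPC(smoother, &spc));
    PetscCall(PCSetType(spc, PCJACOBI)); /* PCSOR would go here, but as noted above it lacks a CUDA implementation */
  }

Whether routing each smoother application through PCHYPRE instead is worth the PETSc/hypre boundary crossings is exactly the open question above.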