<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
Hi Jed</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof ContentPasted0">
Thanks for your reply. I have sent the log files to petsc-maint@mcs.anl.gov.</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof ContentPasted0">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof ContentPasted0">
Zisheng</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Jed Brown &lt;jed@jedbrown.org&gt;<br>
<b>Sent:</b> Tuesday, June 27, 2023 1:02 PM<br>
<b>To:</b> Zisheng Ye &lt;zisheng.ye@ansys.com&gt;; petsc-users@mcs.anl.gov &lt;petsc-users@mcs.anl.gov&gt;<br>
<b>Subject:</b> Re: [petsc-users] GAMG and Hypre preconditioner</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">[External Sender]<br>
<br>
Zisheng Ye via petsc-users &lt;petsc-users@mcs.anl.gov&gt; writes:<br>
<br>
> Dear PETSc Team<br>
><br>
&gt; We are testing the GPU support in PETSc's KSPSolve, especially the GAMG and Hypre preconditioners. We have encountered several issues on which we would like your suggestions.<br>
><br>
&gt; First, we have a couple of questions when working with a single MPI rank:<br>
><br>
&gt; 1. We have tested two backends, CUDA and Kokkos. One commonly encountered error is related to SpGEMM in CUDA when the matrix is large, as shown below:<br>
><br>
> cudaMalloc((void **)&buffer2, bufferSize2) error( cudaErrorMemoryAllocation): out of memory<br>
><br>
&gt; For the CUDA backend, one can use "-matmatmult_backend_cpu -matptap_backend_cpu" to avoid these problems. However, there seem to be no equivalent options for the Kokkos backend. Is there a good practice for avoiding this error on both backends, and can it be avoided with the Kokkos backend?<br>
<br>
Junchao will know more about Kokkos Kernels tuning, but the faster GPU matrix-matrix algorithms use extra memory. We should be able to make the host option available with Kokkos.<br>
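For concreteness, a run combining a GPU matrix type with those CUDA CPU-fallback options might look like this (a sketch only; the executable name and grid sizes are placeholders, not from this thread):<br>

```shell
# Sketch only: ./ex19 and the grid size are hypothetical placeholders.
# -matmatmult_backend_cpu / -matptap_backend_cpu route the MatMatMult and
# MatPtAP products to the host, avoiding the cuSPARSE SpGEMM out-of-memory
# failure quoted above, at the cost of extra host-device transfers.
mpiexec -n 1 ./ex19 -da_grid_x 512 -da_grid_y 512 \
  -dm_mat_type aijcusparse -dm_vec_type cuda \
  -pc_type gamg \
  -matmatmult_backend_cpu -matptap_backend_cpu
```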
<br>
&gt; 2. We have tested the combination of Hypre and the Kokkos backend. The two do not appear to be compatible: KSPSolve takes a greater number of iterations to exit, and the residual norm in the post-check is much larger than the one obtained with the CUDA backend. This happens for matrices with block size larger than 1. Is there an explanation for this behavior?<br>
><br>
&gt; Second, we have a couple more questions when working with multiple MPI ranks:<br>
><br>
&gt; 1. We are currently using OpenMPI because we couldn't get Intel MPI to work as a GPU-aware MPI. Is this a known issue with Intel MPI?<br>
<br>
As far as I know, Intel's MPI is only for SYCL/Intel GPUs. In general, GPU-aware MPI has been incredibly flaky on all HPC systems despite being introduced ten years ago.<br>
<br>
&gt; 2. With OpenMPI, we currently see a slowdown when increasing the MPI rank count, as shown in the figure below. Is this normal?<br>
<br>
Could you share -log_view output from a couple of representative runs? You can send it here or to petsc-maint@mcs.anl.gov. We need to see which operations fail to scale before we can say what is causing the slowdown.<br>
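One way to capture those logs at two rank counts (a sketch; ./my_app and the rank counts are placeholders):<br>

```shell
# Sketch: append -log_view to two representative runs; the ":file" suffix
# writes the report to that file instead of stdout. PETSc prints per-event
# timing and flop-rate tables at the end of each run.
mpiexec -n 1 ./my_app -pc_type gamg -log_view :log_n1.txt
mpiexec -n 8 ./my_app -pc_type gamg -log_view :log_n8.txt
```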
</div>
</span></font></div>
</body>
</html>