<div dir="ltr">Thanks Jed. I had tried just over-preallocating the matrix (using 10 nnz per row) and that solved the problem. I'm not sure what was wrong with my initial preallocation, but it's probably likely that things weren't hanging but just moving very slowly.<div><br></div><div>Rohan</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Dec 17, 2022 at 9:57 PM Jed Brown <<a href="mailto:jed@jedbrown.org">jed@jedbrown.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I ran your code successfully with and without GPU-aware MPI. I see a bit of time in MatSetValue -- you can make it a bit faster using one MatSetValues call per row, but it's typical that assembling a matrix like this (sequentially on the host) will be more expensive than some unpreconditioned CG iterations (that don't come close to solving the problem -- use multigrid if you want to actually solve this problem).<br>
Rohan Yadav <rohany@alumni.cmu.edu> writes:
> Hi,
>
> I'm developing a microbenchmark that runs a CG solve with PETSc on a mesh,
> using a 5-point stencil matrix. My code (linked here:
> https://github.com/rohany/petsc-pde-benchmark/blob/main/main.cpp, only 120
> lines) works on 1 GPU and has great performance. When I move to 2 GPUs, the
> program appears to get stuck in the input generation. I've littered the
> code with print statements and have found the following clues:
>
> * The first rank progresses through this loop:
> https://github.com/rohany/petsc-pde-benchmark/blob/main/main.cpp#L44, but
> then does not exit (it seems to get stuck right before rowStart == rowEnd).
> * The second rank makes very few iterations through the loop for its
> allotted rows.
>
> Therefore, neither rank makes it to the call to MatAssemblyBegin.
>
> I'm running the code using the following command line on the Summit
> supercomputer:
> ```
> jsrun -n 2 -g 1 -c 1 -b rs -r 2
> /gpfs/alpine/scratch/rohany/csc335/petsc-pde-benchmark/main -ksp_max_it 200
> -ksp_type cg -pc_type none -ksp_atol 1e-10 -ksp_rtol 1e-10 -vec_type cuda
> -mat_type aijcusparse -use_gpu_aware_mpi 0 -nx 8485 -ny 8485
> ```
>
> Any suggestions will be appreciated! I feel that I have applied many of the
> common PETSc optimizations, such as preallocating the nonzero counts of my
> matrix rows, so I'm not sure what's going on with this input generation.
>
> Thanks,
>
> Rohan Yadav
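On the preallocation point: a minimal sketch for this stencil (assuming
PETSc's default contiguous row distribution; the names are illustrative, not
taken from the linked code):

```
Mat      A;
PetscInt N = nx * ny; /* global size from the -nx/-ny options */

PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N));
PetscCall(MatSetFromOptions(A)); /* picks up -mat_type aijcusparse */
/* Up to 5 nonzeros per row. With a contiguous row split, the east/west
   neighbors (row +/- 1) stay in the diagonal block for interior rows, while
   the north/south neighbors (row +/- ny) can land off-process near the
   partition boundary, so reserve up to 2 per row in the off-diagonal block. */
PetscCall(MatSeqAIJSetPreallocation(A, 5, NULL));
PetscCall(MatMPIAIJSetPreallocation(A, 5, NULL, 2, NULL));
```

Note that MatMPIAIJSetPreallocation counts the diagonal and off-diagonal
blocks separately: if the off-diagonal count is left at zero, every
off-process entry triggers a reallocation during MatSetValues, which can make
assembly slow enough to look like a hang.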