<div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="margin:0px;font-stretch:normal;line-height:normal;font-family:"Helvetica Neue";min-height:14px"><br><br>
</div>
<div style="margin:0px;font-stretch:normal;line-height:normal;font-family:"Helvetica Neue"">
Would we instead just have 40 (or perhaps slightly fewer) MPI processes all sharing the GPUs? Surely this would be inefficient, and would PETSc distribute the work across all 4 GPUs, or would every process end out using a single GPU?</div></blockquote><div>See <a href="https://docs.olcf.ornl.gov/systems/summit_user_guide.html#volta-multi-process-service" target="_blank">https://docs.olcf.ornl.gov/systems/summit_user_guide.html#volta-multi-process-service</a>. </div></div></div></blockquote><div><br></div><div><div>I'll jump in here but I would recommend not worrying about the number of GPUs and MPI processes (and don't bother with OMP).</div><div><br></div><div>As the MPS link above shows MPS wants to allow for slicing/scheduling the GPU in space and/or time. That is, very flexible. One would hope that this will get better and adapt to new hardware and so that your code does not have to.</div><div><br></div><div>I would focus on getting as much parallelism in your code as possible. GPUs need a lot of threads to run well and with DNS you might have a chance to feed it properly, but I'd just try to get as much as you can.</div></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>In some cases, we did see better performance with multiple mpi ranks/GPU than 1 rank/GPU. The optimal configuration depends on the code. Think two extremes: One code with work done all on GPU and the other all on CPU. Probably you only need 1 mpi rank/node for the former, but full ranks for the latter. </div><div></div></div></div></blockquote><div> </div><div>Another dimension is assuming all work is on the GPU, at least asymptotically, then it's a matter of how much parallelism you have. (OK, not that simple ...) At one extreme you have one giant GPU, in which case you probably want to use multiple ranks and hope MPS can slice the GPU up in space to make it look like multiple GPUs of the right size for me.</div><div><br></div><div>Anecdotally, I have a kernel that is a solver in velocity space that sits in a phase space application (configuration space X and velocity space V) with a tensor decomposition (so the solves between X and V are not coupled). My V space solver is expensive (maybe like complex chemistry in DNS that is independent of the spacial solver) and on smallish problems (less parallelism available) I see an increase of throughput of 5x in going from 1 to 7 cores (MPI ranks) / GPU (IBM/Nvidia, 42 cores and 6 GPU, per node), just running the same problem, embarrassingly parallel. I increased the problem work by 16x and I still got 3x throughput speedup going from 1 to 7 cores.</div><div><br></div></div></div>
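
To make the rank-to-GPU question above concrete, here is a minimal, hypothetical sketch of the common pattern for mapping MPI ranks to GPUs on a node: each rank picks a device round-robin by its node-local rank, and when there are more ranks than GPUs, several ranks share a device and MPS multiplexes their kernels. This is only an illustration of the idea; it is not PETSc's actual device-selection code, and PETSc will do its own device assignment for you.

/* Hypothetical sketch, not PETSc's actual implementation: map each MPI rank
 * to a GPU round-robin by node-local rank.  With more ranks than GPUs,
 * several ranks share a device and MPS schedules their kernels on it. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Communicator of the ranks sharing this node, so the mapping is per node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    int local_rank;
    MPI_Comm_rank(node_comm, &local_rank);

    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);

    if (ngpus > 0) {
        /* e.g. 40 ranks and 4 GPUs: node-local ranks 0,4,8,... land on GPU 0, and so on. */
        int mydev = local_rank % ngpus;
        cudaSetDevice(mydev);
        printf("node-local rank %d -> GPU %d of %d\n", local_rank, mydev, ngpus);
    }

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

With 40 ranks and 4 GPUs, each GPU would be shared by roughly 10 ranks; whether that beats 1 rank per GPU depends on how much of the work actually runs on the GPU, as discussed above.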