<html aria-label="message body"><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div><br></div> The problem size is also very small. Typically one cannot expect speedup when the number of variables per MPI rank is below the order of 10,000. In your 64-process case you have only about 390 variables per rank. I would be stunned to see any speedup at such sizes. Run a problem at least 10 times bigger, better yet 20 times bigger.<div><br id="lineBreakAtBeginningOfMessage"><div><br><blockquote type="cite"><div>On Feb 12, 2026, at 9:00 AM, Matthew Knepley <knepley@gmail.com> wrote:</div><br class="Apple-interchange-newline"><div><div dir="ltr"><div dir="ltr">On Thu, Feb 12, 2026 at 6:48 AM SCOTTO Alexandre via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:</div><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="msg-8319069725590486549">
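As a quick sanity check on the numbers above (a minimal sketch, not PETSc code; the ~10,000-variables-per-rank threshold is the rule of thumb quoted in this reply):

```python
# Rough rows-per-rank for a 25,000-row matrix at each process count tried,
# compared against the ~10,000 variables/rank rule of thumb for MatMult speedup.
N = 25_000
for p in (2, 4, 8, 16, 32, 64):
    per_rank = N // p  # approximate rows owned by each MPI rank
    verdict = "enough work" if per_rank >= 10_000 else "too small to scale"
    print(f"{p:3d} ranks: ~{per_rank:6d} rows/rank ({verdict})")
```

Only the 2-rank case clears the threshold; at 64 ranks each process owns roughly 390 rows, so communication and latency dominate the local work.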
<div lang="EN-US">
<div class="m_-8319069725590486549WordSection1"><p class="MsoNormal">Dear PETSc community,<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">I have conducted a quick strong-scalability test on direct and adjoint matrix-vector products with a 25,000 x 25,000 sparse matrix, distributed over 2, 4, …, 32 and 64 processes, and the results I obtained were not so great.<u></u><u></u></p><p class="MsoNormal">I am not very confident in my setup, so as a matter of reference, are there any available results on weak and strong scalability of PETSc.Mat mult() and multTranspose() operations?</p></div></div></div></blockquote><div><br></div><div>1. This behavior depends on available memory bandwidth, not on the number of cores. Do you know the bandwidth for your configurations?</div><div><br></div><div>2. Strong scaling depends heavily on matrix sparsity. It inevitably declines, but more slowly when there is more work to do.</div><div><br></div><div>3. We published a paper on performance recently: <a href="https://urldefense.us/v3/__https://www.sciencedirect.com/science/article/abs/pii/S016781912100079X__;!!G_uCfscf7eWS!Zr5jUpk1srGDF2h9mXmw_GIn1OFZ2g3APzC0JHZREcxRzzy-2Oz2yyBWtzSI6F21kV4W_ubmc7A0NIIoVXnb$">https://www.sciencedirect.com/science/article/abs/pii/S016781912100079X</a></div><div><br></div><div> Thanks,</div><div><br></div><div> Matt </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="msg-8319069725590486549"><div lang="EN-US"><div class="m_-8319069725590486549WordSection1"><div> <br class="webkit-block-placeholder"></div><p class="MsoNormal">Best regards,<u></u><u></u></p><p class="MsoNormal">Alexandre.<u></u><u></u></p>
</div>
</div>
</div></blockquote></div><div><br clear="all"></div><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Zr5jUpk1srGDF2h9mXmw_GIn1OFZ2g3APzC0JHZREcxRzzy-2Oz2yyBWtzSI6F21kV4W_ubmc7A0NEclfLI_$" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</div></blockquote></div><br></div></body></html>