On Wed 22. Jan 2020 at 16:12, Felix Huber <st107539@stud.uni-stuttgart.de> wrote:

> Hello,
>
> I am currently investigating why our code does not show the expected weak
> scaling behaviour in a CG solver.

Can you please send representative log files which characterize the lack of
scaling (please include the full -log_view output)?

Are you using a KSP/PC configuration which should weak scale?
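
For example, something along these lines (only a sketch; the executable name,
the launcher and the GAMG choice are placeholders for whatever you actually
run):

    # same options for every run; 8x the ranks together with 8x the unknowns
    export PETSC_OPTIONS="-ksp_type cg -pc_type gamg -ksp_view -log_view"
    aprun -n 64  ./your_app > weak_n64.log    # or srun, depending on how the XC40 is set up
    aprun -n 512 ./your_app > weak_n512.log
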
Thanks,
Dave

> Therefore I wanted to try out different communication methods for the
> VecScatter in the matrix-vector product. However, it seems like PETSc
> (version 3.7.6) always chooses either MPI_Alltoallv or MPI_Alltoallw, no
> matter which options I pass via the PETSC_OPTIONS environment variable.
> Does anybody know why this doesn't work as I expected?
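
Just to rule out that the options are not reaching PETSc at all: as far as I
remember, the entries on the VecScatterCreate manual page you link below take
a -vecscatter_ prefix on the command line, and -options_left makes PETSc
report anything it never used. A sketch (placeholder executable name):

    # -options_left lists any unused option at the end of the run
    export PETSC_OPTIONS="-vecscatter_ssend -options_left -log_view"
    aprun -n 512 ./your_app > scatter_ssend_n512.log
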
>
> The matrix is an MPIAIJ matrix created by a finite element discretization
> of a 3D Laplacian, so it only communicates with 'neighboring' MPI ranks.
> Not sure if it helps, but the code runs on a Cray XC40.
>
> I tried the `ssend`, `rsend`, `sendfirst`, and `reproduce` options (as well
> as no option at all) from
> https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Vec/VecScatterCreate.html
> which all result in MPI_Alltoallv. When combined with `nopack`, the
> communication uses MPI_Alltoallw instead.
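
One more thing worth checking (option name from memory, so please verify it
exists in 3.7.6): -vecscatter_view should print which scatter implementation
was actually selected, which you can then compare against the MPI message
counts reported by -log_view. A sketch (placeholder executable name):

    export PETSC_OPTIONS="-vecscatter_nopack -vecscatter_view -log_view"
    aprun -n 512 ./your_app > scatter_nopack_n512.log
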
>
> Best regards,
> Felix