On Wed, Sep 19, 2012 at 6:43 AM, Alexander Grayver <agrayver@gfz-potsdam.de> wrote:

> On the order of 10^6 so far.

So 10^3 vectors, each of size 10^6, is already several GB, which is larger than the local (NUMA) memory of a cluster node (and larger than the entire memory of some nodes). The spill on rank 0 will cause a lot of the memory allocated by rank 0 later on to spill into the memory bus/NUMA region of other sockets/dies of the shared-memory node (think of a four-socket system, for example). This can easily slow the rest of your program down by a factor of 3 or more.
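(As a rough count, assuming 8-byte real scalars: 10^3 right-hand sides times 10^6 entries times 8 bytes is about 8 GB gathered on one rank; double that for complex scalars.)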
> I guess memory has never been a problem because internally PETSc
> used MatMatSolve_Basic for MatMatSolve with MUMPS. Thus you never
> gather more than one rhs.

Yup, likely.
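For reference, a minimal sketch of what a column-by-column MatMatSolve amounts to (roughly the idea behind MatMatSolve_Basic, not the actual PETSc source), written against a recent PETSc API; F is assumed to be an already-factored matrix, B and X dense matrices whose row layout matches F, and the routine name SolveColumnByColumn is just illustrative:

#include <petscmat.h>

/* Solve F X = B one column at a time, so only one RHS vector's worth
   of data is in play per solve (no gather of all columns at once). */
PetscErrorCode SolveColumnByColumn(Mat F, Mat B, Mat X)
{
  PetscInt     N, j;
  Vec          b, x;
  PetscScalar *barr, *xarr;

  PetscFunctionBeginUser;
  PetscCall(MatGetSize(B, NULL, &N));          /* number of right-hand sides */
  PetscCall(MatCreateVecs(F, &x, &b));         /* work vectors with F's layout */
  for (j = 0; j < N; j++) {
    PetscCall(MatDenseGetColumn(B, j, &barr)); /* local part of j-th RHS column */
    PetscCall(MatDenseGetColumn(X, j, &xarr)); /* local part of j-th solution column */
    PetscCall(VecPlaceArray(b, barr));
    PetscCall(VecPlaceArray(x, xarr));
    PetscCall(MatSolve(F, b, x));              /* one solve per RHS column */
    PetscCall(VecResetArray(b));
    PetscCall(VecResetArray(x));
    PetscCall(MatDenseRestoreColumn(B, &barr));
    PetscCall(MatDenseRestoreColumn(X, &xarr));
  }
  PetscCall(VecDestroy(&b));
  PetscCall(VecDestroy(&x));
  PetscFunctionReturn(PETSC_SUCCESS);
}

The point of doing it this way is exactly the memory behavior discussed above: the peak extra storage is one vector, not the full block of right-hand sides on rank 0.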