On Mon, Mar 7, 2011 at 8:20 AM, Gaurish Telang <gaurish108@gmail.com> wrote:
> Hi,
>
> I have been testing PETSc's scalability on clusters for matrices of sizes 2000, 10,000, up to 60,000.

1) These matrices are incredibly small. We usually recommend 10,000 unknowns/process for weak scaling. You might get some benefit from a shared-memory implementation on a multicore machine.

> All I did was try to solve Ax=b for these matrices. I found that the solution time dips if I use up to 16 or 32 processors. However, for a larger number of processors the solution time seems to go up rather than down. Is there any way I can make my code strongly scalable?

2) These are small enough that direct factorization should be the fastest alternative. I would try UMFPACK, SuperLU, and MUMPS.

  Matt
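
For what it's worth, the usual way to hand the factorization off to one of these packages is through runtime options, along the lines of the sketch below (the solver-selection option name, e.g. -pc_factor_mat_solver_package, can differ between PETSc versions, and ./your_app is just a placeholder for your executable):

    mpiexec -n 4 ./your_app -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package mumps -log_summary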

> I am measuring the total time (sec) and the KSPSolve time in the -log_summary output. Both times show the same behaviour described above.
>
> Gaurish

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener