<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sat, Mar 15, 2014 at 4:01 AM, Karl Rupp <span dir="ltr"><<a href="mailto:rupp@iue.tuwien.ac.at" target="_blank">rupp@iue.tuwien.ac.at</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hi William,<br>
<br>
I couldn't find something really suspicious in the logs, so the lack of scalability may be due to hardware limitations. Did you run all MPI processes on the same machine? How many CPU sockets? If it is a single-socket machine, chances are good that you saturate the memory channels pretty well with one process already. With higher process counts the cache per process is reduced, thus reducing cache reuse. This is the only reasonable explanation why the execution time for VecMDot goes up from e.g. 7 seconds for one and two processes to about 24 for four and eight processes.<br>

http://www.mcs.anl.gov/petsc/documentation/faq.html#computers
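
The FAQ entry above is about exactly this: sparse solver kernels such as VecMDot and MatMult are limited by memory bandwidth, not by core count. If the machine has more than one socket, a quick check is to compare a run with all ranks packed onto one socket against a run spread across sockets. A minimal sketch, assuming Open MPI's mpiexec (binding flags differ between MPI implementations) and with ./solver as a placeholder for your executable:

    # all ranks on consecutive cores, i.e. sharing one socket's memory channels
    mpiexec -n 4 --map-by core --bind-to core ./solver -log_summary

    # ranks placed round-robin over sockets, using all memory channels
    mpiexec -n 4 --map-by socket --bind-to socket ./solver -log_summary

If the spread-out run is clearly faster in VecMDot/MatMult, the packed run was starved for memory bandwidth, which is the effect Karl describes.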

> I suggest you try to run the same code across multiple machines if possible; you should see better scalability there. Also, for benchmarking purposes, try replacing the ILU preconditioner with e.g. Jacobi. This should give you better scalability (provided that the solver still converges, of course...).

BJacobi/ASM would be the next thing to try, since it would scale in terms of communication, but not in terms of iteration counts. Eventually you will want a nice multilevel solver for your problem; a sketch of the relevant options follows.
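
A minimal sketch of how to compare these preconditioners from the command line, assuming your code calls KSPSetFromOptions() so runtime options are honored, that the conduction operator is symmetric positive definite (so CG is applicable; otherwise keep the default GMRES), and again with ./solver as a placeholder:

    # Jacobi: cheapest setup, communication-free preconditioner
    mpiexec -n 8 ./solver -ksp_type cg -pc_type jacobi -ksp_converged_reason -log_summary

    # block Jacobi with ILU(0) on each subdomain block
    mpiexec -n 8 ./solver -ksp_type cg -pc_type bjacobi -sub_pc_type ilu -ksp_converged_reason -log_summary

    # one-level additive Schwarz with overlap 1
    mpiexec -n 8 ./solver -ksp_type cg -pc_type asm -pc_asm_overlap 1 -sub_pc_type ilu -ksp_converged_reason -log_summary

    # algebraic multigrid (GAMG), a candidate for the multilevel solver
    mpiexec -n 8 ./solver -ksp_type cg -pc_type gamg -ksp_converged_reason -log_summary

Jacobi and BJacobi/ASM keep the per-iteration cost and communication low, but their iteration counts typically grow with the number of subdomains; a multigrid preconditioner such as GAMG is what keeps both the iteration count and the time per iteration roughly flat as the problem size and process count grow.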

   Matt

> Best regards,
> Karli
>
> On 03/14/2014 10:45 PM, William Coirier wrote:
>> I've written a parallel, finite-volume, transient thermal conduction solver using PETSc primitives, and so far things have been going great. Comparison to theory for a simple problem (transient conduction in a semi-infinite slab) looks good, but I'm not getting very good parallel scaling behavior with the KSP solver. Whether I use the default KSP/PC or other sensible combinations, the time spent in KSPSolve does not seem to scale well at all.
>>
>> I seem to have load-balanced the problem across processes well enough. The PETSc logging/profiling has been really useful for reworking various code segments, and right now the bottleneck is KSPSolve; I can't seem to figure out how to get it to scale properly.
>>
>> I'm attaching output produced with -log_summary, -info, -ksp_view and -pc_view all specified on the command line for 1, 2, 4 and 8 processes.
>>
>> If you guys have any suggestions, I'd definitely like to hear them! And I apologize in advance if I've done something stupid. All the documentation has been really helpful.
>>
>> Thanks in advance...
>>
>> Bill Coirier

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener