On Tue, Nov 8, 2011 at 4:42 AM, Robert Ellis <Robert.Ellis@geosoft.com> wrote:

<div lang="EN-CA" link="blue" vlink="purple">
<div>
<p class="MsoNormal">Hello Petsc Developers,<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">I have a predominantly Petsc application but for simplicity it uses a very few MPI_AllReduce calls. I am finding that the MPI_AllReduce operations are sometimes causing problems (appears to be semaphore time outs) if the interprocess communication
is slow. I never have any problem with the Petsc operations. Is it reasonable that Petsc would be more robust that MPI_AllReduce?</p></div></div></blockquote><div><br></div><div>No. There are a lot of Allreduce() calls in the source:</div>

    find src -name "*.c" | xargs grep MPI_Allreduce

   Matt
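
For concreteness, here is a minimal sketch of the kind of call in
question (the variable names, the summed value, and the use of
PETSC_COMM_WORLD are illustrative assumptions, not code from this
thread). PETSc's own reductions, such as norms and dot products, issue
calls of exactly this form internally:

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      double local = 1.0, total = 0.0;

      PetscInitialize(&argc, &argv, NULL, NULL);

      /* Every rank contributes "local"; every rank receives the sum.
         This is the same collective that PETSc issues internally. */
      MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM,
                    PETSC_COMM_WORLD);

      PetscFinalize();
      return 0;
    }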
<div lang="EN-CA" link="blue" vlink="purple"><div><p class="MsoNormal"><br></p><p class="MsoNormal"><u></u></p>
<p class="MsoNormal">Also, is the VecScatterCreateToAll set of operations the best way to replace the MPI_AllReduce?<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Thanks for any advice,<u></u><u></u></p>
<p class="MsoNormal">Rob<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
</div>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
   -- Norbert Wiener