<div class="gmail_quote">On Fri, Aug 26, 2011 at 04:19, Dominik Szczerba <span dir="ltr"><<a href="mailto:dominik@itis.ethz.ch">dominik@itis.ethz.ch</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div id=":3ao">I seem to have had a classical deadlock, A was being assembled while<br>
some threads lurked around elsewhere. Adding some barriers seems to<br>
fix the problem, at least with the cases I currently have.<br></div></blockquote><div><br></div><div>Barriers should never affect the correctness of a pure MPI code that doesn't do weird things like communicate through the filesystem. We use the barriers for debugging, but they can generally be removed once the underlying issue is sorted out.</div>
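As a minimal sketch of that debugging pattern (hypothetical code, not from
your program; the helper name, the printf reporting, and the omission of error
checking are my own choices), bracketing the suspect assembly with a temporary
barrier makes a hang surface right there instead of at some later, unrelated
synchronization point:

#include <stdio.h>
#include <petscmat.h>

/* Hypothetical debugging helper: report which ranks reach assembly and add a
 * temporary barrier so a deadlock shows up here rather than later.  Error
 * checking is omitted for brevity. */
static void DebugAssemble(Mat A)
{
  MPI_Comm    comm;
  PetscMPIInt rank;

  PetscObjectGetComm((PetscObject)A, &comm);
  MPI_Comm_rank(comm, &rank);

  printf("[rank %d] entering assembly\n", rank);
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MPI_Barrier(comm); /* temporary: remove once the issue is understood */
  printf("[rank %d] past assembly\n", rank);
}

If some rank never prints "past assembly", you know it was held up before or
inside assembly rather than at whatever later call the hang would otherwise
appear in.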
Also, when you say "threads", are you referring to MPI processes, or are you
using actual threads (e.g. pthreads or OpenMP)?
<div id=":3ao">
<br>
What I still miss is what would be the advantage of<br>
MPI_Barrier(((PetscObject)A)->comm) over<br>
MPI_Barrier(PETSC_COMM_WORLD).<br></div></blockquote></div><br><div>I don't know whether all processes on PETSC_COMM_WORLD are supposed to pass through this assembly. If A was on a subcommunicator, then only those processes should be calling assembly. Note that these communicators are the same for many users.</div>
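To make the distinction concrete, here is a minimal sketch (hypothetical code,
not from this thread, written against a recent PETSc with error checking
omitted; the even/odd split, the sizes, and the variable names are made up for
illustration) in which A lives on a subcommunicator:

#include <petscmat.h>

int main(int argc, char **argv)
{
  PetscMPIInt rank;
  MPI_Comm    subcomm, Acomm;
  Mat         A;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* Hypothetical split: even ranks hold the matrix, odd ranks do other work. */
  MPI_Comm_split(PETSC_COMM_WORLD, rank % 2, rank, &subcomm);

  if (rank % 2 == 0) {
    MatCreate(subcomm, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 100, 100);
    MatSetFromOptions(A);
    MatSetUp(A);
    /* ... MatSetValues(...) on these ranks only ... */
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    /* A barrier on the matrix's own communicator (((PetscObject)A)->comm,
     * obtained here through the public PetscObjectGetComm) involves exactly
     * the ranks that hold A.  MPI_Barrier(PETSC_COMM_WORLD) at this point
     * would deadlock, because the odd ranks never execute this branch. */
    PetscObjectGetComm((PetscObject)A, &Acomm);
    MPI_Barrier(Acomm);

    MatDestroy(&A);
  }

  MPI_Comm_free(&subcomm);
  PetscFinalize();
  return 0;
}

When A really does live on PETSC_COMM_WORLD the two barriers are
interchangeable; the object's communicator is only the safer habit.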