<div dir="ltr"><div>Jed,<br><br></div>Thank you very much. <br><br>They made some observations, and they might make some progresses. I at least can make some runs now. They also say that it is something about ordering/rendezvous. They said that there may be too many messages or too long messages or both.<br>
On Wed, Oct 23, 2013 at 4:22 PM, Jed Brown <jedbrown@mcs.anl.gov> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">Fande Kong <<a href="mailto:fd.kong@siat.ac.cn">fd.kong@siat.ac.cn</a>> writes:<br>
<br>
> Hi Barry,<br>
><br>
</div><div class="im">> I contacted the supercomputer center, and they asked me for a test case so<br>
> that they can forward it to IBM. Is it possible that we write a test case<br>
> only using MPI? It is not a good idea that we send them the whole petsc<br>
> code and my code.<br>
<br>
> This may be possible, but this smells of a message ordering/rendezvous
> problem, in which case you'll have to reproduce essentially the same
> communication pattern.  The fact that you don't see the error sooner in
> your program execution (and that it doesn't affect lots of other people)
> indicates that the bug may be very specific to your communication
> pattern.  In fact, it is likely that changing your element distribution
> algorithm, or some similar changes, may make the bug vanish.
> Reproducing all this matching context in a stand-alone code is likely to
> be quite a lot of effort.
>
> I would configure the system to dump core on errors and package up the
> test case.
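
Regarding dumping core: besides setting ulimit -c unlimited in the job script, we could raise the core-file limit from inside the test program itself. This is just a generic POSIX sketch I would call at the top of main in the reproducer above; whether core files are actually written for each MPI rank still depends on the system and launcher configuration.

/* sketch: raise the core-file soft limit to the hard limit at startup */
#include <stdio.h>
#include <sys/resource.h>

static void enable_core_dumps(void)
{
  struct rlimit rl;
  if (getrlimit(RLIMIT_CORE, &rl) == 0) {
    rl.rlim_cur = rl.rlim_max;   /* as large as the hard limit permits */
    if (setrlimit(RLIMIT_CORE, &rl) != 0)
      perror("setrlimit(RLIMIT_CORE)");
  } else {
    perror("getrlimit(RLIMIT_CORE)");
  }
}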