Yes. I haven't tested my code with MPICH, because MPICH doesn't support InfiniBand on our supercomputer. I have tested the code with MVAPICH, and it also has bugs when the mesh is larger than 50M.
Now I just want to know whether you plan to change PetscSF to use point-to-point communication. If not, I will try to rewrite PetscSF myself.
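To be concrete, by "point-to-point" I mean something like the sketch below (an illustration only, not the actual PetscSF code): each rank sends its contribution with MPI_Isend/MPI_Irecv and the owning rank does the addition locally, instead of calling MPI_Accumulate on a window.

/* Hypothetical sketch, not PETSc code: the "add my contribution into a
 * remote buffer" pattern done with point-to-point messages instead of
 * MPI_Accumulate(..., MPI_SUM, win). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int         rank, size, right, left;
  double      contribution, remote_sum = 0.0, incoming;
  MPI_Request reqs[2];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  right = (rank + 1) % size;        /* rank whose buffer I add into  */
  left  = (rank - 1 + size) % size; /* rank that adds into my buffer */
  contribution = (double)rank;

  /* Send the value and let the owner do the addition locally. */
  MPI_Irecv(&incoming, 1, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &reqs[0]);
  MPI_Isend(&contribution, 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);
  MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
  remote_sum += incoming;           /* local accumulation by the owner */

  printf("[%d] accumulated %g\n", rank, remote_sum);
  MPI_Finalize();
  return 0;
}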
Thanks,

On Fri, Nov 2, 2012 at 4:41 PM, Jed Brown <jedbrown@mcs.anl.gov> wrote:
> On Nov 2, 2012 5:24 PM, "Fande Kong" <fd.kong@siat.ac.cn> wrote:
> >
> > I got it. You still haven't fixed the bugs that I encountered a few months ago. None of the current MPI implementations supports MPI_Accumulate, or the other one-sided communication functions, well when the data is large.
>
> I have not had problems with MPICH.
-- 
Fande Kong
ShenZhen Institutes of Advanced Technology
Chinese Academy of Sciences