> Open MPI one-sided operations with datatypes still have known bugs. They
> have had bug reports with reduced test cases for several years now. They
> need to fix those bugs. Please let them know that you are also waiting...
>
> To work around that, and for other reasons, I will write a new SF
> implementation using point-to-point.

How long will it take you to rewrite the SF implementation using point-to-point? In the meantime, I need to finish a project that depends on my current code. Could you tell me how to work around the issue with the function PetscSFReduceBegin (see the sketch below for the kind of fallback I have in mind), or could you modify PetscSFReduceBegin first?

There is another concern about writing a new SF: DMComplex and PetscSection may have to change as well, because both objects use the SF for communication.
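To illustrate, this is roughly the kind of point-to-point fallback I mean: each leaf sends its contribution directly to the rank owning the root, instead of going through a one-sided MPI_Accumulate on a window. This is only a toy sketch with one leaf and one root on two ranks, not PETSc's actual SF code, and MPI_LONG_LONG here is just a stand-in for a 64-bit PetscInt:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int       rank, size;
  long long pair[2] = {0, 0};   /* stand-in for one MPIU_2INT unit */

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if (size >= 2) {
    if (rank == 1) {
      /* leaf side: instead of MPI_Accumulate(..., MPI_REPLACE, ...) on a
         window, send the contribution directly to the owner of the root */
      pair[0] = 42; pair[1] = 7;
      MPI_Send(pair, 2, MPI_LONG_LONG, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
      /* root side: receive and store, giving MPI_REPLACE semantics */
      MPI_Recv(pair, 2, MPI_LONG_LONG, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      printf("root now holds (%lld, %lld)\n", pair[0], pair[1]);
    }
  }
  MPI_Finalize();
  return 0;
}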
>> On Sep 11, 2012 12:44 PM, "fdkong" <fd.kong at foxmail.com> wrote:
>>
>> Hi Matt,
>>
>> Thanks. I guess there are two reasons:
>>
>> (1) The MPI function MPI_Accumulate with the operation MPI_REPLACE is not
>> supported in the implementation of Open MPI 1.4.3 or other Open MPI versions.
>>
>> (2) The MPI function does not accept the datatype MPIU_2INT when we use
>> 64-bit integers. But when we run on MPICH, it works well!
>>
>> ------------------
>> Fande Kong
>> ShenZhen Institutes of Advanced Technology
>> Chinese Academy of Sciences
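For reference, below is a small reproducer of the combination I believe is failing: MPI_Accumulate with MPI_REPLACE on a derived two-integer datatype. My understanding is that with 64-bit indices PETSc builds MPIU_2INT as a derived type (two PetscInts via MPI_Type_contiguous) rather than the predefined MPI_2INT, and that one-sided accumulates with derived types are what trip the Open MPI bug. MPI_LONG_LONG is again a stand-in for PetscInt; this is an illustrative sketch, not PETSc's actual code:

#include <mpi.h>

int main(int argc, char **argv)
{
  int          rank;
  long long    buf[2] = {0, 0}, val[2] = {1, 2};
  MPI_Datatype type2;            /* analogue of MPIU_2INT with 64-bit PetscInt */
  MPI_Win      win;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  /* a derived two-integer type, as opposed to the predefined MPI_2INT */
  MPI_Type_contiguous(2, MPI_LONG_LONG, &type2);
  MPI_Type_commit(&type2);

  MPI_Win_create(buf, (MPI_Aint)(2*sizeof(long long)), (int)sizeof(long long),
                 MPI_INFO_NULL, MPI_COMM_WORLD, &win);
  MPI_Win_fence(0, win);
  if (rank == 1) {
    /* the combination that appears to fail on Open MPI 1.4.3:
       MPI_REPLACE with a derived datatype in a one-sided accumulate */
    MPI_Accumulate(val, 1, type2, 0, 0, 1, type2, MPI_REPLACE, win);
  }
  MPI_Win_fence(0, win);

  MPI_Win_free(&win);
  MPI_Type_free(&type2);
  MPI_Finalize();
  return 0;
}

Run with at least two ranks; on MPICH this completes, which matches what we see with our real code.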