[petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3 with 64-bit integers

Jed Brown jedbrown at mcs.anl.gov
Tue Sep 11 17:28:42 CDT 2012


Yes, use MPICH or MVAPICH. I plan to do the rewrite sometime this fall, but it
is not my highest priority right now.
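(For reference, one way to follow this suggestion, sketched here rather than
taken from the thread, is to let PETSc download and build MPICH itself:
reconfigure with --download-mpich together with --with-64-bit-indices and
whatever other options the existing build used, then rebuild PETSc and the
application against the new MPI.)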
On Sep 11, 2012 4:35 PM, "fdkong" <fd.kong at siat.ac.cn> wrote:

> >Open MPI one-sided operations with datatypes still have known bugs. They
> >have had bug reports with reduced test cases for several years now. They
> >need to fix those bugs. Please let them know that you are also waiting...
>
> >To work around that, and for other reasons, I will write a new SF
> >implementation using point-to-point.
>
> How long will it take you to rewrite the SF implementation using
> point-to-point? Right now I need to complete a project that depends on my
> current code. Could you please tell me how to work around the issue with
> PetscSFReduceBegin, or could you modify PetscSFReduceBegin first?
>
> And there may be other problems when you write the new SF: DMComplex and
> PetscSection may need to change as well, because both objects use the SF
> for communication.
>
> >>On Sep 11, 2012 12:44 PM, "fdkong" <fd.kong at foxmail.com> wrote:
> >
> > >Hi Matt,
> >>
> > >Thanks. I guess there are two reasons:
> >>
> > >(1) The MPI function MPI_Accumulate with operation MPI_REPLACE is not
> > >supported in the implementation of OpenMPI 1.4.3 or other OpenMPI
> > >versions.
> >>
> > >(2) The MPI function does not accept the datatype MPIU_2INT when we use
> > >64-bit integers. But when we run on MPICH, it works well!
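
For reference, a reduced standalone test along these lines could look like the
sketch below (this is not the test case from the original report, and the
contiguous pair type is only an assumption about how MPIU_2INT is represented
when PetscInt is 64-bit):

/* Sketch: MPI_Accumulate with MPI_REPLACE on a derived datatype of two
 * 64-bit integers, standing in for MPIU_2INT with --with-64-bit-indices.
 * This only illustrates the failure mode described above; it is not code
 * taken from PETSc or from the original report. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  long long    local[2], winbuf[2] = {-1, -1};
  int          rank, size;
  MPI_Datatype pair;               /* stand-in for MPIU_2INT */
  MPI_Win      win;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  MPI_Type_contiguous(2, MPI_LONG_LONG, &pair);
  MPI_Type_commit(&pair);

  local[0] = rank;
  local[1] = 10 * rank;

  MPI_Win_create(winbuf, 2 * sizeof(long long), sizeof(long long),
                 MPI_INFO_NULL, MPI_COMM_WORLD, &win);
  MPI_Win_fence(0, win);
  /* Only the last rank replaces the pair stored at rank 0, so the
   * expected result is deterministic. */
  if (rank == size - 1)
    MPI_Accumulate(local, 1, pair, 0, 0, 1, pair, MPI_REPLACE, win);
  MPI_Win_fence(0, win);

  if (rank == 0)
    printf("winbuf = %lld %lld (expected %d %d)\n",
           winbuf[0], winbuf[1], size - 1, 10 * (size - 1));

  MPI_Win_free(&win);
  MPI_Type_free(&pair);
  MPI_Finalize();
  return 0;
}

Under MPICH this should print the pair written by the last rank; if the
problem is in Open MPI's one-sided datatype path, the same run under Open MPI
1.4.3 would be expected to fail or produce wrong values.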
> >>
> > >------------------
> > >Fande Kong
> > >ShenZhen Institutes of Advanced Technology
> > >Chinese Academy of Sciences
> >>
> >>
> >>
>

