[mpich-discuss] Using __float128 with MPI_Accumulate()?

liu547479170 liu547479170 at 163.com
Wed Dec 28 02:19:21 CST 2011


Hi, sir,
   Your suggestion helped me a lot; thanks for your help!
   I installed my mpich2-1.4.1p1 like this:
      ./configure --prefix=/usr/local/mpich2 FC=/usr/local/intel/composer_xe_2011_sp1.7.256/bin/ia32/ifort F77=/usr/local/intel/composer_xe_2011_sp1.7.256/bin/ia32/ifort CC=/usr/bin/gcc CXX=/usr/bin/g++
      make
      make install
 This all worked well, but when I run the following:
         which mpd
        which mpiexec
        which mpirun
 I get these results:
           /usr/local/intel/composer_xe_2011_sp1.7.256/mpirt/bin/ia32/mpd
          /usr/local/bin/mpiexec
          /usr/local/bin/mpirun
  These are different from the paths that other people report: for them, all three commands are under /usr/local/bin/.
  So what is wrong with my installation, and what should I do to get mpd into /usr/local/bin/mpd?
     By the way, my OS is Ubuntu 11.10 with Intel Fortran.
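
For reference: with --prefix=/usr/local/mpich2, the MPICH2 binaries are installed under /usr/local/mpich2/bin rather than /usr/local/bin, and the mpd being picked up comes from the Intel compiler's bundled runtime because its directory appears earlier in PATH. Assuming a bash shell, one way to make which find the freshly built tools first is:

      export PATH=/usr/local/mpich2/bin:$PATH
      which mpd mpiexec mpirun

This only changes the lookup order; it does not move any files into /usr/local/bin.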



At 2011-12-26 07:33:14, "Jed Brown" <jedbrown at mcs.anl.gov> wrote:
GCC 4.6 introduced support for quad precision using the __float128 type. Since __float128 is not part of the MPI standard, PETSc defines new real and complex MPI_Datatypes for __float128 and we define our own MPI_SUM, MPI_MAX, etc. This works fine for "normal" reductions, but it's a problem with MPI_Accumulate() because the standard does not allow user-defined reduce operations.


Is there any way to work around this limitation other than to not use MPI_Accumulate() in any context that might some day be asked to work with __float128?
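
For reference, here is a minimal sketch of the approach described above: a contiguous MPI datatype covering the bytes of a __float128 plus a user-defined sum operation. The names (MPIU_FLOAT128, MPIU_SUM_F128, f128_sum, setup_float128) are illustrative only, not PETSc's or MPICH's actual identifiers, and a byte-based datatype like this is not portable across heterogeneous machines:

    #include <mpi.h>

    /* Illustrative names; neither the datatype nor the op is part of the MPI standard.
       Assumes GCC >= 4.6 so that __float128 is available. */
    static MPI_Datatype MPIU_FLOAT128 = MPI_DATATYPE_NULL;
    static MPI_Op       MPIU_SUM_F128 = MPI_OP_NULL;

    /* User-defined reduction: element-wise sum of __float128 values. */
    static void f128_sum(void *in, void *inout, int *len, MPI_Datatype *dtype)
    {
        __float128 *a = (__float128 *)in, *b = (__float128 *)inout;
        for (int i = 0; i < *len; i++) b[i] += a[i];
        (void)dtype;
    }

    /* Call once after MPI_Init(). */
    static void setup_float128(void)
    {
        MPI_Type_contiguous(sizeof(__float128), MPI_BYTE, &MPIU_FLOAT128);
        MPI_Type_commit(&MPIU_FLOAT128);
        MPI_Op_create(f128_sum, /* commute = */ 1, &MPIU_SUM_F128);
    }

    /* Ordinary reductions accept the user-defined op:
         MPI_Allreduce(send, recv, n, MPIU_FLOAT128, MPIU_SUM_F128, comm);
       but MPI_Accumulate() only takes predefined ops such as MPI_SUM, so
       MPIU_SUM_F128 cannot be passed to it -- which is the restriction asked about. */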