[mpich-discuss] Using __float128 with MPI_Accumulate()?

Pavan Balaji balaji at mcs.anl.gov
Sun Dec 25 23:53:11 CST 2011


You can write up a ticket and propose the appropriate changes to the MPI 
standard.  We have an army going from Argonne to the Forum; someone can 
do the formal reading for you.  Dave wrote the ticket for the MPI-2.2 
datatype additions, so he'll know exactly where the text needs to be 
added or modified.

  -- Pavan

On 12/25/2011 11:33 PM, Jed Brown wrote:
> On Sun, Dec 25, 2011 at 21:37, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>
>     I meant a predefined datatype, not a predefined op.  I believe the
>     limitation in this particular case is not the operations, just the
>     datatypes, anyway.
>
>
> Yes, exactly. Is there anything I can do to help move this forward?
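
To make the datatype restriction concrete, here is a rough sketch (not code
from this thread; the names mpi_float128, mpi_float128_sum, and setup_float128
are invented for illustration) of the usual workaround for reducing __float128
with two-sided collectives, and why it does not carry over to MPI_Accumulate,
which under MPI-2.2 accepts only the predefined reduction operations on
(derived types over) predefined datatypes:

#include <mpi.h>

static MPI_Datatype mpi_float128;     /* hypothetical names, for illustration */
static MPI_Op       mpi_float128_sum;

/* User-defined reduction: element-wise sum of __float128 values. */
static void float128_sum(void *in, void *inout, int *len, MPI_Datatype *dtype)
{
  const __float128 *a = (const __float128 *)in;
  __float128       *b = (__float128 *)inout;
  for (int i = 0; i < *len; i++) b[i] += a[i];
  (void)dtype;
}

static void setup_float128(void)
{
  /* Wrap __float128 in a derived type; MPI-2.2 has no predefined type for it. */
  MPI_Type_contiguous(sizeof(__float128), MPI_BYTE, &mpi_float128);
  MPI_Type_commit(&mpi_float128);
  MPI_Op_create(float128_sum, 1 /* commutative */, &mpi_float128_sum);
}

/* MPI_Allreduce(send, recv, n, mpi_float128, mpi_float128_sum, comm) works,
 * but the same datatype/op pair is not allowed in MPI_Accumulate: there the
 * op must be predefined and the datatype built from a predefined type for
 * which that op is defined. */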
>
> We have had support for __float128 for about a year in PETSc, and we can
> demonstrate applications where it is critical. I have some new parallel
> primitives that we would like to use more extensively in PETSc, but the
> most efficient (lowest setup cost) implementation relies on
> MPI_Accumulate().
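
For context, here is a minimal illustration (not PETSc code; add_to_remote and
its arguments are made up) of the kind of one-sided accumulation such
primitives rely on.  With MPI_DOUBLE and MPI_SUM this is legal MPI-2.2; a
__float128 equivalent would need a predefined datatype, which is exactly the
gap discussed above:

#include <mpi.h>

/* Accumulate n local doubles into a remote window region using a fence
 * epoch.  Illustrative only; error checking omitted. */
void add_to_remote(const double *local, int n, int target_rank,
                   MPI_Aint target_disp, MPI_Win win)
{
  MPI_Win_fence(0, win);                    /* open an access epoch */
  MPI_Accumulate((void *)local, n, MPI_DOUBLE,
                 target_rank, target_disp, n, MPI_DOUBLE,
                 MPI_SUM, win);             /* op must be predefined */
  MPI_Win_fence(0, win);                    /* complete the epoch */
}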

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji

