<div class="gmail_quote">On Tue, Apr 24, 2012 at 09:02, Jack Poulson <span dir="ltr"><<a href="mailto:jack.poulson@gmail.com">jack.poulson@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="gmail_extra"><div><div class="h5">On Tue, Apr 24, 2012 at 7:19 AM, Jed Brown <span dir="ltr"><<a href="mailto:jedbrown@mcs.anl.gov" target="_blank">jedbrown@mcs.anl.gov</a>></span> wrote:<br><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div><div class="gmail_quote">On Tue, Apr 24, 2012 at 07:15, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Of course don't fix it. Who do these guys work for, Sandia?</blockquote></div><br></div><div>They could fix it as an extension in MPICH2, but it would have to be added to the standard for us to rely on it. Considering that we still can't use anything past MPI-1.1, we won't be able to remove the hacks for a long time. This is not to say that it isn't worth fixing, just that your daughter will be in college by the time the fix can be assumed by callers.</div>
>
> Why not just cast the std::complex<double>* buffer to a double* buffer of twice the length and use MPI_SUM with the MPI_DOUBLE datatype? I have always found MPI's support for complex datatypes to be hopelessly buggy, so I play these tricks whenever possible. Is this what PETSc currently does for complex MPI_SUM?
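For concreteness, the cast-and-sum trick looks roughly like this (a sketch only; SumComplex and the in-place reduction are illustrative, not PETSc code):

#include <mpi.h>
#include <complex>
#include <vector>

// std::complex<double> is laid out as two adjacent doubles (real, imag),
// so a buffer of n complex values can be viewed as 2*n doubles and summed
// with the predefined MPI_SUM, sidestepping MPI complex datatypes entirely.
void SumComplex(std::vector<std::complex<double> >& buf, MPI_Comm comm)
{
  MPI_Allreduce(MPI_IN_PLACE, reinterpret_cast<double*>(&buf[0]),
                2*static_cast<int>(buf.size()), MPI_DOUBLE, MPI_SUM, comm);
}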
PETSc configure checks for MPI_C_DOUBLE_COMPLEX when using C99 complex (but not with C++ because, although the type exists in the header, it errors at run time on Windows). For C++ complex, we do not try to use an MPI built-in (the only way to get one is through the deprecated C++ bindings); instead we create our own MPI_Datatype to hold complex values and our own MPI_Op that is capable of operating on them. This works fine for collectives, but cannot be used with MPI_Accumulate. Therefore, anywhere that we call MPI_Accumulate, we "translate" our MPI_Op to the appropriate predefined MPI_Op.
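In outline, the setup looks something like this (a sketch; MPIU_COMPLEX, MPIU_SUM, and ComplexSum are illustrative stand-ins rather than the exact names in the source):

#include <mpi.h>
#include <complex>

static MPI_Datatype MPIU_COMPLEX; // our complex datatype: two contiguous doubles
static MPI_Op       MPIU_SUM;     // our complex-capable sum

// User-defined reduction: element-wise complex addition.
static void ComplexSum(void *in, void *inout, int *len, MPI_Datatype *dtype)
{
  const std::complex<double> *a = static_cast<const std::complex<double>*>(in);
  std::complex<double>       *b = static_cast<std::complex<double>*>(inout);
  for (int i = 0; i < *len; i++) b[i] += a[i];
}

static void SetupComplexOps(void)
{
  MPI_Type_contiguous(2, MPI_DOUBLE, &MPIU_COMPLEX); // (real, imag) pair
  MPI_Type_commit(&MPIU_COMPLEX);
  MPI_Op_create(ComplexSum, 1/*commutative*/, &MPIU_SUM);
}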
After jumping through these hoops, at least the user can always use our MPI_Ops regardless of whether the interface is implemented using collectives or one-sided operations.
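The translation for the one-sided path amounts to something like the following (a sketch; the wrapper name is hypothetical, and it assumes the window was created with a displacement unit of sizeof(double)):

#include <mpi.h>
#include <complex>

// MPI_Accumulate only accepts predefined MPI_Ops, so a complex sum is
// expressed as a double sum of twice the length. Assumes the window's
// displacement unit is sizeof(double).
int AccumulateComplexSum(std::complex<double> *origin, int count,
                         int target_rank, MPI_Aint target_disp, MPI_Win win)
{
  return MPI_Accumulate(reinterpret_cast<double*>(origin), 2*count, MPI_DOUBLE,
                        target_rank, 2*target_disp, 2*count, MPI_DOUBLE,
                        MPI_SUM, win);
}

The restriction to predefined ops exists because the target must be able to apply the operation without running any user code, which is also why the translation has to happen on the origin side.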