[petsc-dev] Detecting MPI_C_DOUBLE_COMPLEX when clanguage!='C'

Jed Brown jedbrown at mcs.anl.gov
Tue Apr 24 09:13:29 CDT 2012


On Tue, Apr 24, 2012 at 09:02, Jack Poulson <jack.poulson at gmail.com> wrote:

> On Tue, Apr 24, 2012 at 7:19 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
>> On Tue, Apr 24, 2012 at 07:15, Matthew Knepley <knepley at gmail.com> wrote:
>>
>>> Of course don't fix it. Who do these guys work for, Sandia?
>>
>>
>> They could fix it as an extension in MPICH2, but it would have to be
>> added to the standard for us to rely on it. Considering that we still can't
>> use anything past MPI-1.1, we won't be able to remove the hacks for a long
>> time. This is not to say that it isn't worth fixing, just that your
>> daughter will be in college by the time the fix can be assumed by callers.
>>
>
> Why not just cast the std::complex<double>* buffer into a double* buffer
> of twice the length and use MPI_SUM for the MPI_DOUBLE datatype? I have
> always found MPI's support for complex datatypes to be hopelessly buggy, so
> I play these tricks whenever possible. Is this what PETSc currently does
> for complex MPI_SUM?
>
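
For concreteness, the cast trick described above looks roughly like the
following sketch (illustrative only; the function name is made up, and it
assumes an in-place sum reduction). It works because complex addition is
elementwise, so MPI_SUM on pairs of doubles gives the right answer:

  #include <mpi.h>
  #include <complex>
  #include <vector>

  // Sum a buffer of std::complex<double> across a communicator by viewing
  // it as a double buffer of twice the length.
  void complex_sum_allreduce(std::vector<std::complex<double> > &buf, MPI_Comm comm)
  {
    MPI_Allreduce(MPI_IN_PLACE, reinterpret_cast<double *>(buf.data()),
                  2 * static_cast<int>(buf.size()), MPI_DOUBLE, MPI_SUM, comm);
  }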

PETSc configure checks for MPI_C_DOUBLE_COMPLEX when using C99 complex (but
not with C++, because although the type exists in the header, it fails at
run time on Windows). For C++ complex, we do not try to use an MPI built-in
(the only way to get one is through the deprecated C++ bindings); instead we
create our own MPI_Datatype to hold complex values and our own MPI_Op that
can operate on them. This works fine for collectives, but cannot be used
with MPI_Accumulate. Therefore, anywhere we call MPI_Accumulate, we
"translate" our MPI_Op to the appropriate predefined MPI_Op.
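
A minimal sketch of that setup (not the actual PETSc code; the names
MPIU_CXX_COMPLEX, MPIU_CXX_SUM, and ComplexSum are made up for
illustration):

  #include <mpi.h>
  #include <complex>

  static MPI_Datatype MPIU_CXX_COMPLEX = MPI_DATATYPE_NULL;
  static MPI_Op       MPIU_CXX_SUM     = MPI_OP_NULL;

  // User-defined reduction: add complex values elementwise.
  static void ComplexSum(void *in, void *inout, int *len, MPI_Datatype *dtype)
  {
    const std::complex<double> *a = static_cast<const std::complex<double> *>(in);
    std::complex<double>       *b = static_cast<std::complex<double> *>(inout);
    for (int i = 0; i < *len; i++) b[i] += a[i];
  }

  void SetupComplexMPI(void)
  {
    // A complex value is stored as two contiguous doubles.
    MPI_Type_contiguous(2, MPI_DOUBLE, &MPIU_CXX_COMPLEX);
    MPI_Type_commit(&MPIU_CXX_COMPLEX);
    // Commutative user-defined sum, usable in collectives but not in
    // MPI_Accumulate (which only accepts predefined ops).
    MPI_Op_create(ComplexSum, 1 /* commutative */, &MPIU_CXX_SUM);
  }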

After jumping through these hoops, at least the user can always use our
MPI_Ops, regardless of whether the interface underneath is implemented with
collectives or with one-sided operations.
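
As a sketch of the translation at an MPI_Accumulate call site (the helper
name and window setup are assumed for illustration, not PETSc's API, and it
assumes the window's displacement unit is sizeof(double)):

  #include <mpi.h>
  #include <complex>

  // Accumulate complex values into a window: user-defined MPI_Ops are not
  // allowed in MPI_Accumulate, so view the complex buffer as doubles of
  // twice the length and use the predefined MPI_SUM instead.
  void accumulate_complex_sum(std::complex<double> *origin, int count,
                              int target_rank, MPI_Aint target_disp, MPI_Win win)
  {
    MPI_Accumulate(origin, 2 * count, MPI_DOUBLE,
                   target_rank, 2 * target_disp, 2 * count, MPI_DOUBLE,
                   MPI_SUM, win);
  }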