[petsc-users] PetscObjectGetComm

Junchao Zhang junchao.zhang at gmail.com
Wed Apr 22 11:03:33 CDT 2020


An MPI_Comm is an opaque handle in C and an integer in Fortran; this is required
by the MPI standard. The same applies to other handle types such as MPI_Op, MPI_Win, etc.
MPICH and Open MPI implement the handles differently. In MPICH a handle is an
integer bitfield, with some bits encoding an offset into an array of
objects; this makes things like MPI_Comm_f2c() easy. In Open MPI
handles are pointers, so Open MPI has to translate pointers to integer offsets
in MPI_Comm_c2f().
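
For what it's worth, here is a minimal sketch of the conversion routines mentioned
above (not one of your tests; it only assumes a standard MPI installation):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Fint f_comm;
  MPI_Comm c_comm;
  int      rank;

  MPI_Init(&argc, &argv);
  /* C handle -> Fortran integer handle (what a Fortran caller would see) */
  f_comm = MPI_Comm_c2f(MPI_COMM_WORLD);
  /* and back: a C handle referring to the same communicator */
  c_comm = MPI_Comm_f2c(f_comm);
  MPI_Comm_rank(c_comm, &rank);
  printf("rank %d: Fortran handle of MPI_COMM_WORLD = %d\n", rank, (int)f_comm);
  MPI_Finalize();
  return 0;
}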

Running your tests with Open MPI, you can see different pointer values on each
rank in C, but the same integer offsets in Fortran:
test_comms.c:
0 4 -1419258464 909680992 909680992
1 4 1152255392 -2144517440 -2144517440
2 4 -306719328 768197312 768197312
3 4 -1766709856 715374384 715374384

test_comms.f90:
           0           0           3
           1           0           3
           2           0           3
           3           0           3

Running with MPICH, you can see the C and Fortran MPI_Comm values are the same.
Why the ranks do not all have the same integer bitfield, I don't know; you would
need to dig into the MPICH code.
test_comms.c:
0 4 1140850688 -2080374780 -2080374780
1 4 1140850688 -2080374780 -2080374780
2 4 1140850688 -2080374782 -2080374782
3 4 1140850688 -2080374782 -2080374782

test_comms.f90:
           0  1140850688 -2080374780
           1  1140850688 -2080374780
           2  1140850688 -2080374782
           3  1140850688 -2080374782

In summary, users should not expect MPI_Comm variables to be equal across
ranks, nor should they MPI_Send an MPI_Comm variable to remote ranks.
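
If you actually need to check whether two communicators refer to the same
processes, use MPI_Comm_compare instead of comparing handle values. A minimal
sketch (again only assuming a standard MPI installation):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Comm dup;
  int      rank, result;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_dup(MPI_COMM_WORLD, &dup);
  /* Compare the duplicate with the original instead of printing handles */
  MPI_Comm_compare(MPI_COMM_WORLD, dup, &result);
  if (rank == 0)
    printf("MPI_IDENT? %d  MPI_CONGRUENT? %d\n",
           result == MPI_IDENT, result == MPI_CONGRUENT);
  /* A dup has the same group and rank order but is a distinct communicator,
     so this prints: MPI_IDENT? 0  MPI_CONGRUENT? 1 */
  MPI_Comm_free(&dup);
  MPI_Finalize();
  return 0;
}
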
--Junchao Zhang


On Wed, Apr 22, 2020 at 8:56 AM Patrick Sanan <patrick.sanan at gmail.com>
wrote:

> Perhaps the confusion here is related to the fact that an MPI_Comm is not
> an integer identifying the communicator. Rather,
> it's a pointer to a data structure which contains information about the
> communicator (I'm not positive but probably something like this
> <https://github.com/pmodels/mpich/blob/master/src/include/mpir_comm.h#L150>
> ).
>
> You're converting that pointer to an int and printing it out. The value
> happens to be the same on all ranks except 0, but this
> doesn't directly tell you anything about equality of the MPI_Comm objects
> that those pointers point to.
>
> On Wed, Apr 22, 2020 at 3:28 PM Matthew Knepley <knepley at gmail.com>
> wrote:
>
>> On Wed, Apr 22, 2020 at 3:07 AM Marius Buerkle <mbuerkle at web.de> wrote:
>>
>>> I see, but I am still puzzled: why are the communicators different on
>>> different nodes even though it is the same object?
>>>
>>
>> This is the output of MPI_Comm_dup() on line 126 of tagm.c. Therefore,
>> duplicated comms are not guaranteed to have the same handle value
>> across multiple processes.
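>>
>> A minimal sketch of how one could check this in C (assuming a PETSc build of
>> this era with ierr/CHKERRQ error handling; the MPI_CONGRUENT outcome is what
>> the dup implies, not something verified in this thread):
>>
>> #include <petscmat.h>
>>
>> int main(int argc, char **argv)
>> {
>>   Mat            A;
>>   MPI_Comm       comm;
>>   int            result;
>>   PetscErrorCode ierr;
>>
>>   ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
>>   ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
>>   /* comm is PETSc's internal duplicate of PETSC_COMM_WORLD, so its handle
>>      value is not meaningful to print or to compare across ranks */
>>   ierr = PetscObjectGetComm((PetscObject)A, &comm);CHKERRQ(ierr);
>>   MPI_Comm_compare(PETSC_COMM_WORLD, comm, &result);
>>   ierr = PetscPrintf(PETSC_COMM_WORLD, "congruent with PETSC_COMM_WORLD? %d\n",
>>                      result == MPI_CONGRUENT);CHKERRQ(ierr);
>>   ierr = MatDestroy(&A);CHKERRQ(ierr);
>>   ierr = PetscFinalize();
>>   return ierr;
>> }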
>>
>>   Thanks,
>>
>>      Matt
>>
>>
>>>
>>>
>>> PETSc creates a duplicate of the communicator during object creation.
>>>
>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscCommDuplicate.html
>>>
>>> Jose
>>>
>>>
>>> > On Apr 22, 2020, at 8:40, Marius Buerkle <mbuerkle at web.de> wrote:
>>> >
>>> > Hi Dave,
>>> >
>>> > I want to use it in Fortran if possible. But I tried both C and
>>> Fortran just to see if it works in general. I am using MPICH 3.3.2. I
>>> attached the MWE for C and Fortran with the output I get.
>>> >
>>> > Marius
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > Hi,
>>> >
>>> > What is PetscObjectGetComm expected to return?
>>> >
>>> > As Patrick said, it returns the communicator associated with the petsc
>>> object.
>>> >
>>> > I thought it would give the MPI communicator the object lives on. So
>>> if I create a matrix A on PETSC_COMM_WORLD, shouldn't a call to
>>> PetscObjectGetComm for A return PETSC_COMM_WORLD? But it seems to return
>>> something else, and while most of the nodes return a similar communicator,
>>> some are giving a different one.
>>> >
>>> > How are you actually comparing the communicators (send a code snippet)?
>>> Which MPI implementation are you using? And when you are comparing comms,
>>> is the comparison code written in C or Fortran?
>>> >
>>> >
>>> > That said, is there a way to get the MPI communicator a matrix lives
>>> on?
>>> >
>>> > You are using the correct function. There is a macro as well but it’s
>>> best to use the function.
>>> >
>>> > Thanks,
>>> > Dave
>>> >
>>> >
>>> >
>>> >
>>> > Best,
>>> > Marius
>>> > <test_comm.tar.gz>
>>>
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>