[petsc-dev] PetscSF in Fortran
Barry Smith
bsmith at mcs.anl.gov
Sun Oct 22 21:04:01 CDT 2017
Adrian,
Sorry for the delay in generating the bindings for you. I have put them in the branch
barry/fortran-PetscSFBcastBegin
the example src/vec/sf/examples/tutorials/ex1f.F90 attempts to test them. Please let us know if this works for you and if you need anything else. They will only work for MPIU_INT, MPIU_REAL, and MPIU_SCALAR because those are the only Fortran pointer array types we currently support.
Barry
I was shocked, shocked to discover that it looks like PetscSF has essentially no bindings for Fortran. We can add more relatively easily if you need them. I don't understand how you could have tested your code previously, given that PetscSFCreate() did not even have a Fortran stub generated.
BTW, PETSc folks: we don't even seem to support MPIU_INT and MPIU_REAL/MPIU_SCALAR in Fortran? I am making an issue.
> On Oct 19, 2017, at 10:29 PM, Jed Brown <jed at jedbrown.org> wrote:
>
> Adrian Croucher <a.croucher at auckland.ac.nz> writes:
>
>> hi
>>
>>
>> On 19/10/17 06:45, Matthew Knepley wrote:
>>> On Tue, Oct 17, 2017 at 11:35 PM, Adrian Croucher
>>> <a.croucher at auckland.ac.nz <mailto:a.croucher at auckland.ac.nz>> wrote:
>>>
>>>
>>> So, now I'm trying to add Fortran bindings for PetscSFBcastBegin()
>>> and PetscSFBcastEnd().
>>>
>>> From the C side I have added the following into
>>> src/vec/is/sf/interface/f90-custom/zsff90.c:
>>>
>>> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf,
>>>   MPI_Datatype *unit, F90Array1d *rptr, F90Array1d *lptr, int *ierr
>>>   PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>>> {
>>>   PetscDataType ptype;
>>>   const void    *rootdata;
>>>   void          *leafdata;
>>>
>>>   *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if (*ierr) return;
>>>   *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>>   *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>>>
>>>   *ierr = PetscSFBcastBegin(*sf, *unit, rootdata, leafdata);
>>> }
>>>
>>> and similarly for petscsfbcastend_(). Does this look plausible?
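
For reference, a minimal sketch of what that matching end binding might look like, mirroring the begin stub above; it assumes PetscSFBcastEnd() takes the same (sf, unit, rootdata, leafdata) arguments as PetscSFBcastBegin(), and is untested:

    PETSC_EXTERN void PETSC_STDCALL petscsfbcastend_(PetscSF *sf,
      MPI_Datatype *unit, F90Array1d *rptr, F90Array1d *lptr, int *ierr
      PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
    {
      PetscDataType ptype;
      const void    *rootdata;
      void          *leafdata;

      /* same datatype translation and F90 array access as the begin stub */
      *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if (*ierr) return;
      *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
      *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
      *ierr = PetscSFBcastEnd(*sf, *unit, rootdata, leafdata);
    }
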
>>>
>>> Then some wrappers need to be added to
>>> src/vec/f90-mod/petscis.h90. I am not sure how to do those.
>>>
>>> The difficulty is in declaring the arrays that are passed in,
>>> which can be of various types. In C they are declared as void*,
>>> but I'm not sure what to do with that in Fortran. I can't seem to
>>> find any other example wrappers in PETSc to model it on either.
>>> Any suggestions?
>>>
>>
>>
>> I think this is working now by just declaring those void* C variables as
>> type(*) in the Fortran interface, e.g.:
>>
>> Interface
>>   Subroutine PetscSFBcastBegin(sf,unit,rarray,larray,ierr)
>>     use petscisdef
>>     PetscSF :: sf
>>     PetscInt :: unit
>>     type(*) :: rarray(:)
>>     type(*) :: larray(:)
>>     PetscErrorCode :: ierr
>>   End Subroutine PetscSFBcastBegin
>> End Interface
>>
>> The only difficulty I have left with this is in the MPI_Datatype
>> variable. I'd forgotten that these datatypes are all different in C and
>> Fortran as well.
>>
>> I amended the C interface code to the following, to convert the Fortran
>> MPI datatype (actually an integer) to a C MPI_Datatype:
>>
>> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Fint *unit,
>>   F90Array1d *rptr, F90Array1d *lptr, int *ierr
>>   PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>> {
>>   PetscDataType pdtype;
>>   MPI_Datatype  dtype;
>>   const void    *rootdata;
>>   void          *leafdata;
>>
>>   dtype = MPI_Type_f2c(*unit);
>>   *ierr = PetscMPIDataTypeToPetscDataType(dtype, &pdtype);if (*ierr) return;
>>   *ierr = F90Array1dAccess(rptr, pdtype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>   *ierr = F90Array1dAccess(lptr, pdtype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>>
>>   *ierr = PetscSFBcastBegin(*sf, dtype, rootdata, leafdata);
>> }
>>
>> The problem is that this only seems to work if I declare the datatype in
>> the calling Fortran code to be the appropriate C MPI datatype, e.g.
>> MPI_INT, rather than the corresponding Fortran datatype, e.g.
>> MPI_INTEGER (which causes PetscMPIDataTypeToPetscDataType() to fail, as
>> something weird gets passed in for dtype).
>>
>> I was expecting the opposite to be true. It doesn't seem right to have
>> to use the C datatypes in Fortran code (it is confusing if the Fortran
>> datatypes are used elsewhere), so I suspect I've messed something up.
>> Anyone have any ideas?
>
> I think PetscMPIDataTypeToPetscDataType should be extended to handle the
> Fortran types.
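
A rough sketch of what that extension might look like (hypothetical; MPI_INTEGER, MPI_REAL, and MPI_DOUBLE_PRECISION are the C-side handles MPI defines for the Fortran datatypes, and the existing C cases are kept):

    /* Possible additions inside PetscMPIDataTypeToPetscDataType().  Note
       that mapping Fortran's default INTEGER (32-bit) to PETSC_INT is
       only unambiguous when PetscInt is also 32-bit -- the same size
       problem Jed points out below. */
    if      (mtype == MPIU_INT)             *ptype = PETSC_INT;
    else if (mtype == MPI_INT)              *ptype = PETSC_INT;
    else if (mtype == MPI_INTEGER)          *ptype = PETSC_INT;    /* Fortran INTEGER */
    else if (mtype == MPI_REAL)             *ptype = PETSC_FLOAT;  /* Fortran REAL */
    else if (mtype == MPI_DOUBLE_PRECISION) *ptype = PETSC_DOUBLE; /* Fortran DOUBLE PRECISION */
    /* ... remaining existing cases (MPI_LONG, etc.) unchanged ... */
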
>
> BTW, this code looks very wrong to me:
>
> if (mtype == MPIU_INT) *ptype = PETSC_INT;
> else if (mtype == MPI_INT) *ptype = PETSC_INT;
>
> [...]
>
> else if (mtype == MPI_LONG) *ptype = PETSC_LONG;
>
>
> If PetscInt is 64-bit then MPI_INT (which corresponds to int) would be
> mapped to a 64-bit int. Those PETSc datatypes either need to be
> abandoned (I think MPI types would do) or we need a terminology to
> distinguish PetscInt from int.
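
To make that hazard concrete, a hypothetical fragment assuming a build with --with-64-bit-indices (so PetscInt, and therefore PETSC_INT, denotes an 8-byte integer):

    PetscDataType  ptype;
    PetscErrorCode ierr;

    /* MPI_INT describes a 4-byte C int, but the mapping below yields
       PETSC_INT, which in this build denotes an 8-byte integer. */
    ierr = PetscMPIDataTypeToPetscDataType(MPI_INT, &ptype);
    /* Any helper that computes element sizes from ptype (for example
       F90Array1dAccess() above) would then stride through a plain int
       array with twice the true element size. */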