[petsc-dev] PetscSFCount is not compatible with MPI_Count

Junchao Zhang junchao.zhang at gmail.com
Tue Mar 29 17:45:33 CDT 2022


On Tue, Mar 29, 2022 at 4:59 PM Satish Balay via petsc-dev <
petsc-dev at mcs.anl.gov> wrote:

> We do have such builds in CI - don't know why CI didn't catch it.
>
> $ grep with-64-bit-indices=1 *.py
> arch-ci-freebsd-cxx-cmplx-64idx-dbg.py:  '--with-64-bit-indices=1',
> arch-ci-linux-cuda-double-64idx.py:    '--with-64-bit-indices=1',
> arch-ci-linux-cxx-cmplx-pkgs-64idx.py:  '--with-64-bit-indices=1',
> arch-ci-linux-pkgs-64idx.py:  '--with-64-bit-indices=1',
> arch-ci-opensolaris-misc.py:  '--with-64-bit-indices=1',
>
It implies these CI jobs do not have a recent MPI (e.g., MPICH 4.x) that
supports the MPI-4 large-count API. It looks like we need to add one.
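
A CI job covering this combination might be configured roughly as below. This is only a sketch: the exact flags a given CI arch file uses vary, but `--with-64-bit-indices=1` appears in the grep output above, `--download-mpich` fetches a current MPICH (4.x at the time of writing, which provides the MPI-4 large-count `_c` bindings), and `--with-clanguage=cxx` reproduces the C++ compile mode in which the error was reported.

```shell
# Hypothetical configure line for a CI arch exercising
# 64-bit indices + C++ + an MPI with MPI-4 large-count support:
./configure --with-64-bit-indices=1 --with-clanguage=cxx --download-mpich
```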


>
> Satish
>
> On Tue, 29 Mar 2022, Fande Kong wrote:
>
> > OK, I attached the configure log here so that we have more information.
> >
> > I feel like we should do
> >
> > typedef MPI_Count PetscSFCount
> >
> > Do we have a 64-bit-indices-with-C++ target in CI? I was surprised
> > that I am the only one who saw this issue.
> >
> > Thanks,
> >
> > Fande
> >
> > On Tue, Mar 29, 2022 at 2:50 PM Satish Balay <balay at mcs.anl.gov> wrote:
> >
> > > What MPI is this? How to reproduce?
> > >
> > > Perhaps it's best if you can send the relevant logs.
> > >
> > > The likely trigger code in sfneighbor.c:
> > >
> > > >>>>
> > > /* A convenience temporary type */
> > > #if defined(PETSC_HAVE_MPI_LARGE_COUNT) && defined(PETSC_USE_64BIT_INDICES)
> > >   typedef PetscInt     PetscSFCount;
> > > #else
> > >   typedef PetscMPIInt  PetscSFCount;
> > > #endif
> > >
> > > This change is at https://gitlab.com/petsc/petsc/-/commit/c87b50c4628
> > >
> > > Hm - if MPI supported LARGE_COUNT - perhaps it also provides a type
> > > that should go with it which we could use - instead of PetscInt?
> > >
> > >
> > > Perhaps it should be: "typedef long PetscSFCount;"
> > >
> > > Satish
> > >
> > >
> > > On Tue, 29 Mar 2022, Fande Kong wrote:
> > >
> > > > It seems correct according to
> > > >
> > > > #define PETSC_SIZEOF_LONG 8
> > > >
> > > > #define PETSC_SIZEOF_LONG_LONG 8
> > > >
> > > >
> > > > Cannot convert from "non-constant" to "constant"?
> > > >
> > > > Fande
> > > >
> > > > On Tue, Mar 29, 2022 at 2:22 PM Fande Kong <fdkong.jd at gmail.com>
> wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > When building PETSc with 64-bit indices, it seems that
> > > > > PetscSFCount is defined as a different integer type than MPI_Count:
> > > > >
> > > > > typedef long MPI_Count;
> > > > >
> > > > > typedef PetscInt   PetscSFCount;
> > > > >
> > > > >
> > > > >  I had the following errors. Do I have a bad MPI?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Fande
> > > > >
> > > > >
> > > > >
> > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:171:18:
> > > > > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > > >   PetscCallMPI(MPIU_Ineighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,distcomm,req));
> > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> > > > > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> > > > >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j) MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32:
> > > > > note: expanded from macro 'PetscCallMPI'
> > > > >   PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5:
> > > > > note: candidate function not viable: no known conversion from
> > > > > 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *'
> > > > > (aka 'const long *') for 2nd argument
> > > > > int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > >
> > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:195:18:
> > > > > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > > >   PetscCallMPI(MPIU_Ineighbor_alltoallv(leafbuf,dat->leafcounts,dat->leafdispls,unit,rootbuf,dat->rootcounts,dat->rootdispls,unit,distcomm,req));
> > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> > > > > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> > > > >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j) MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32:
> > > > > note: expanded from macro 'PetscCallMPI'
> > > > >   PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5:
> > > > > note: candidate function not viable: no known conversion from
> > > > > 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *'
> > > > > (aka 'const long *') for 2nd argument
> > > > > int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > >
> > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:240:18:
> > > > > error: no matching function for call to 'MPI_Neighbor_alltoallv_c'
> > > > >   PetscCallMPI(MPIU_Neighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,comm));
> > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:96:79:
> > > > > note: expanded from macro 'MPIU_Neighbor_alltoallv'
> > > > >   #define MPIU_Neighbor_alltoallv(a,b,c,d,e,f,g,h,i) MPI_Neighbor_alltoallv_c(a,b,c,d,e,f,g,h,i)
> > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32:
> > > > > note: expanded from macro 'PetscCallMPI'
> > > > >   PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:1001:5:
> > > > > note: candidate function not viable: no known conversion from
> > > > > 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *'
> > > > > (aka 'const long *') for 2nd argument
> > > > > int MPI_Neighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > >
> > > > >
> > > >
> > >
> > >
> >
>
>

