[petsc-dev] PetscSFCount is not compatible with MPI_Count

Junchao Zhang junchao.zhang at gmail.com
Tue Mar 29 21:46:25 CDT 2022


Yes, the problem comes from "Mac + 64-bit indices + --with-clanguage=C++".
In Fande's case, PetscInt (int64_t) is "long long int" while MPI_Count is
"long int".  Though both are 64-bit, they are distinct types in C++ overload
resolution.

$ cat test.c
extern int foo(long long *);
void bar() {long a = 0; foo(&a);}

$ g++ -c test.c
clang: warning: treating 'c' input as 'c++' when in C++ mode, this behavior
is deprecated [-Wdeprecated]
test.c:2:25: error: no matching function for call to 'foo'
void bar() {long a = 0; foo(&a);}
                        ^~~
test.c:1:12: note: candidate function not viable: no known conversion from
'long *' to 'long long *' for 1st argument
extern int foo(long long *);

$ gcc -c test.c
test.c:2:29: warning: incompatible pointer types passing 'long *' to
parameter of type 'long long *' [-Wincompatible-pointer-types]
void bar() {long a = 0; foo(&a);}
                            ^~
test.c:1:27: note: passing argument to parameter here
extern int foo(long long *);
                          ^
1 warning generated.

I would just use MPI types for MPI arguments.  Also, it looks like we need
a 64-bit CI job on a Mac.

--Junchao Zhang


On Tue, Mar 29, 2022 at 7:07 PM Satish Balay <balay at mcs.anl.gov> wrote:

> On Tue, 29 Mar 2022, Junchao Zhang wrote:
>
> > On Tue, Mar 29, 2022 at 4:59 PM Satish Balay via petsc-dev <
> > petsc-dev at mcs.anl.gov> wrote:
> >
> > > We do have such builds in CI - don't know why CI didn't catch it.
> > >
> > > $ grep with-64-bit-indices=1 *.py
> > > arch-ci-freebsd-cxx-cmplx-64idx-dbg.py:  '--with-64-bit-indices=1',
> > > arch-ci-linux-cuda-double-64idx.py:    '--with-64-bit-indices=1',
> > > arch-ci-linux-cxx-cmplx-pkgs-64idx.py:  '--with-64-bit-indices=1',
> > > arch-ci-linux-pkgs-64idx.py:  '--with-64-bit-indices=1',
> > > arch-ci-opensolaris-misc.py:  '--with-64-bit-indices=1',
> > >
> > It implies these CI jobs do not have a recent MPI (like MPICH 4.x)
> > that supports MPI-4 large counts? It looks like we need one.
>
> And a Mac
>
> I can't reproduce on linux [even with latest clang]
>
> Satish
>
> >
> >
> > >
> > > Satish
> > >
> > > On Tue, 29 Mar 2022, Fande Kong wrote:
> > >
> > > > OK, I attached the configure log here so that we have more
> > > > information.
> > > >
> > > > I feel like we should do
> > > >
> > > > typedef MPI_Count PetscSFCount
> > > >
> > > > Do we have a 64-bit-indices-with-C++ build in CI? I was surprised
> > > > that I am the only one who saw this issue.
> > > >
> > > > Thanks,
> > > >
> > > > Fande
> > > >
> > > > On Tue, Mar 29, 2022 at 2:50 PM Satish Balay <balay at mcs.anl.gov>
> > > > wrote:
> > > >
> > > > > What MPI is this? How to reproduce?
> > > > >
> > > > > Perhaps its best if you can send the relevant logs.
> > > > >
> > > > > The likely trigger code in sfneighbor.c:
> > > > >
> > > > > >>>>
> > > > > /* A convenience temporary type */
> > > > > #if defined(PETSC_HAVE_MPI_LARGE_COUNT) && defined(PETSC_USE_64BIT_INDICES)
> > > > >   typedef PetscInt     PetscSFCount;
> > > > > #else
> > > > >   typedef PetscMPIInt  PetscSFCount;
> > > > > #endif
> > > > >
> > > > > This change is at
> > > > > https://gitlab.com/petsc/petsc/-/commit/c87b50c4628
> > > > >
> > > > > Hm - if MPI supports LARGE_COUNT - perhaps it also provides a type
> > > > > that should go with it, which we could use instead of PetscInt?
> > > > >
> > > > >
> > > > > Perhaps it should be: "typedef long PetscSFCount;"
> > > > >
> > > > > Satish
> > > > >
> > > > >
> > > > > On Tue, 29 Mar 2022, Fande Kong wrote:
> > > > >
> > > > > > It seems correct according to
> > > > > >
> > > > > > #define PETSC_SIZEOF_LONG 8
> > > > > >
> > > > > > #define PETSC_SIZEOF_LONG_LONG 8
> > > > > >
> > > > > >
> > > > > > Cannot convert from "non-constant" to "constant"?
> > > > > >
> > > > > > Fande
> > > > > >
> > > > > > On Tue, Mar 29, 2022 at 2:22 PM Fande Kong <fdkong.jd at gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi All,
> > > > > > >
> > > > > > > When building PETSc with 64-bit indices, it seems that
> > > > > > > PetscSFCount is a 64-bit integer while MPI_Count is still 32-bit.
> > > > > > >
> > > > > > > typedef long MPI_Count;
> > > > > > >
> > > > > > > typedef PetscInt   PetscSFCount;
> > > > > > >
> > > > > > >
> > > > > > >  I had the following errors. Do I have a bad MPI?
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Fande
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:171:18:
> > > > > > > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > > > > > PetscCallMPI(MPIU_Ineighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,distcomm,req));
> > > > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> > > > > > > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> > > > > > >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j) MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> > > > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32:
> > > > > > > note: expanded from macro 'PetscCallMPI'
> > > > > > >   PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5:
> > > > > > > note: candidate function not viable: no known conversion from 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for 2nd argument
> > > > > > > int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > > > >
> > > > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:195:18:
> > > > > > > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > > > > > PetscCallMPI(MPIU_Ineighbor_alltoallv(leafbuf,dat->leafcounts,dat->leafdispls,unit,rootbuf,dat->rootcounts,dat->rootdispls,unit,distcomm,req));
> > > > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> > > > > > > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> > > > > > >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j) MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> > > > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32:
> > > > > > > note: expanded from macro 'PetscCallMPI'
> > > > > > >   PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5:
> > > > > > > note: candidate function not viable: no known conversion from 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for 2nd argument
> > > > > > > int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > > > >
> > > > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:240:18:
> > > > > > > error: no matching function for call to 'MPI_Neighbor_alltoallv_c'
> > > > > > > PetscCallMPI(MPIU_Neighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,comm));
> > > > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:96:79:
> > > > > > > note: expanded from macro 'MPIU_Neighbor_alltoallv'
> > > > > > >   #define MPIU_Neighbor_alltoallv(a,b,c,d,e,f,g,h,i) MPI_Neighbor_alltoallv_c(a,b,c,d,e,f,g,h,i)
> > > > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32:
> > > > > > > note: expanded from macro 'PetscCallMPI'
> > > > > > >   PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:1001:5:
> > > > > > > note: candidate function not viable: no known conversion from 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for 2nd argument
> > > > > > > int MPI_Neighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > > > >
>

