<div dir="ltr"><div dir="ltr"><br><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 29, 2022 at 4:59 PM Satish Balay via petsc-dev <<a href="mailto:petsc-dev@mcs.anl.gov">petsc-dev@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">We do have such builds in CI - don't know why CI didn't catch it.<br>
<br>
$ grep with-64-bit-indices=1 *.py<br>
arch-ci-freebsd-cxx-cmplx-64idx-dbg.py: '--with-64-bit-indices=1',<br>
arch-ci-linux-cuda-double-64idx.py: '--with-64-bit-indices=1',<br>
arch-ci-linux-cxx-cmplx-pkgs-64idx.py: '--with-64-bit-indices=1',<br>
arch-ci-linux-pkgs-64idx.py: '--with-64-bit-indices=1',<br>
arch-ci-opensolaris-misc.py: '--with-64-bit-indices=1',<br>
<br></blockquote><div>It implies these CI jobs do not have a recent MPI (like MPICH-4.x ) that supports MPI-4 large count? It looks we need to have one.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
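For reference, a build that should exercise this combination locally (and could back such a CI job) would be something like the following - assuming --download-mpich in a recent PETSc fetches an MPI-4-capable MPICH 4.x:

$ ./configure --with-64-bit-indices=1 --with-clanguage=cxx --download-mpich

That should define PETSC_HAVE_MPI_LARGE_COUNT together with PETSC_USE_64BIT_INDICES and hit the failing code path in sfneighbor.c.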
>
> Satish
>
> On Tue, 29 Mar 2022, Fande Kong wrote:
>
> > OK, I attached the configure log here so that we have more information.
> >
> > I feel like we should do
> >
> > typedef MPI_Count PetscSFCount;
> >
> > Do we have a 64-bit-indices target with C++ in CI? I was
> > surprised that I am the only one who has seen this issue.
> >
> > Thanks,
> >
> > Fande
> >
> > On Tue, Mar 29, 2022 at 2:50 PM Satish Balay <balay@mcs.anl.gov> wrote:
> >
> > > What MPI is this? How to reproduce?
> > >
> > > Perhaps it's best if you can send the relevant logs.
> > >
> > > The likely trigger code is in sfneighbor.c:
> > >
> > > >>>>
> > > /* A convenience temporary type */
> > > #if defined(PETSC_HAVE_MPI_LARGE_COUNT) && defined(PETSC_USE_64BIT_INDICES)
> > > typedef PetscInt PetscSFCount;
> > > #else
> > > typedef PetscMPIInt PetscSFCount;
> > > #endif
> > >
> > > This change is at https://gitlab.com/petsc/petsc/-/commit/c87b50c4628
> > >
> > > Hm - if the MPI supports LARGE_COUNT - perhaps it also provides a type
> > > that should go with it, which we could use instead of PetscInt?
> > >
> > > Perhaps it should be: "typedef long PetscSFCount;"
> > >
> > > Satish
> > >
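MPI-4 does provide exactly such a type: MPI_Count, which the _c large-count variants take for their count arrays (the displacement arrays are MPI_Aint). So the sketch I have in mind - assuming PETSC_HAVE_MPI_LARGE_COUNT is only defined when those prototypes actually exist - is:

/* Let the MPI implementation dictate the count type, so the arrays we
   pass to MPI_Ineighbor_alltoallv_c() etc. match its prototype exactly,
   whatever integer type MPI_Count happens to be on a given MPI. */
#if defined(PETSC_HAVE_MPI_LARGE_COUNT) && defined(PETSC_USE_64BIT_INDICES)
typedef MPI_Count PetscSFCount;
#else
typedef PetscMPIInt PetscSFCount;
#endif

Hard-wiring "typedef long PetscSFCount;" would fix this MPICH but break any MPI whose MPI_Count is, say, long long.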
> > >
> > > On Tue, 29 Mar 2022, Fande Kong wrote:
> > >
> > > > The sizes seem correct, according to
> > > >
> > > > #define PETSC_SIZEOF_LONG 8
> > > >
> > > > #define PETSC_SIZEOF_LONG_LONG 8
> > > >
> > > > So it cannot convert from "non-constant" to "constant"?
> > > >
> > > > Fande
> > > >
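(The constness turns out to be a red herring - adding const in that direction is always allowed. The real mismatch is in the "aka" notes below: this MPICH defines MPI_Count as long while PetscInt is long long, and although both are 8 bytes per the sizes above, C and C++ treat them as distinct types, so the pointer types are incompatible. A two-line sketch that reproduces the same diagnostic with any compiler:

long long counts[2] = {1, 2}; /* plays the role of PetscSFCount[] */
const long *p = counts;       /* error: no known conversion from
                                 'long long *' to 'const long *' */

Hence matching MPI_Count exactly, rather than choosing our own 64-bit type, seems like the robust fix.)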
> > > > On Tue, Mar 29, 2022 at 2:22 PM Fande Kong <fdkong.jd@gmail.com> wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > When building PETSc with 64-bit indices, it seems that PetscSFCount is
> > > > > a 64-bit integer while MPI_Count is still 32-bit.
> > > > >
> > > > > typedef long MPI_Count;
> > > > >
> > > > > typedef PetscInt PetscSFCount;
> > > > >
> > > > > I got the following errors. Do I have a bad MPI?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Fande
> > > > >
> > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:171:18: error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > > >   PetscCallMPI(MPIU_Ineighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,distcomm,req));
> > > > >                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79: note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> > > > > #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j) MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> > > > >                                                       ^~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32: note: expanded from macro 'PetscCallMPI'
> > > > >     PetscMPIInt _7_errorcode = __VA_ARGS__; \
> > > > >                                ^~~~~~~~~~~
> > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5: note: candidate function not viable: no known conversion from 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for 2nd argument
> > > > > int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > >     ^
> > > > >
> > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:195:18: error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > > >   PetscCallMPI(MPIU_Ineighbor_alltoallv(leafbuf,dat->leafcounts,dat->leafdispls,unit,rootbuf,dat->rootcounts,dat->rootdispls,unit,distcomm,req));
> > > > >                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79: note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> > > > > #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j) MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> > > > >                                                       ^~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32: note: expanded from macro 'PetscCallMPI'
> > > > >     PetscMPIInt _7_errorcode = __VA_ARGS__; \
> > > > >                                ^~~~~~~~~~~
> > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5: note: candidate function not viable: no known conversion from 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for 2nd argument
> > > > > int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > >     ^
> > > > >
> > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:240:18: error: no matching function for call to 'MPI_Neighbor_alltoallv_c'
> > > > >   PetscCallMPI(MPIU_Neighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,comm));
> > > > >                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:96:79: note: expanded from macro 'MPIU_Neighbor_alltoallv'
> > > > > #define MPIU_Neighbor_alltoallv(a,b,c,d,e,f,g,h,i) MPI_Neighbor_alltoallv_c(a,b,c,d,e,f,g,h,i)
> > > > >                                                    ^~~~~~~~~~~~~~~~~~~~~~~~
> > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32: note: expanded from macro 'PetscCallMPI'
> > > > >     PetscMPIInt _7_errorcode = __VA_ARGS__; \
> > > > >                                ^~~~~~~~~~~
> > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:1001:5: note: candidate function not viable: no known conversion from 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for 2nd argument
> > > > > int MPI_Neighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > >     ^
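To take PETSc out of the picture, here is a minimal sketch that reproduces the same diagnostic against this MPICH's headers (not PETSc code; MPI_COMM_SELF is just a placeholder, since we only care about compilation):

#include <mpi.h>
#include <stddef.h>

int main(int argc, char **argv)
{
  long long counts[1] = {0}; /* stand-in for PetscSFCount[], i.e. long long[] */
  MPI_Aint  displs[1] = {0};
  MPI_Init(&argc, &argv);
  /* Compiled as C++ this is rejected with the same "candidate function not
     viable" note (a C compiler warns about the incompatible pointer type):
     MPI_Neighbor_alltoallv_c() wants 'const MPI_Count *' count arrays. */
  MPI_Neighbor_alltoallv_c(NULL, counts, displs, MPI_INT,
                           NULL, counts, displs, MPI_INT, MPI_COMM_SELF);
  MPI_Finalize();
  return 0;
}

With "typedef MPI_Count PetscSFCount;" the count arrays would match whatever type the MPI's large-count prototypes expect, by construction.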