[petsc-dev] Bad use of defined(MPI_XXX)
Jed Brown
jed at jedbrown.org
Fri May 24 23:08:20 CDT 2019
To be fair, Microsoft just doesn't have resources left over for MPI
after all they've been putting into making sure nobody in China learns
about Tiananmen Square.
"Smith, Barry F. via petsc-dev" <petsc-dev at mcs.anl.gov> writes:
> MS-MPI has the following in its include file:
>
> #define MPI_VERSION 2
> #define MPI_SUBVERSION 0
>
> Their last release was Oct 2018 https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi-release-notes
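
Because the header advertises only 2.0, any compile-time gate keyed to MPI_VERSION/MPI_SUBVERSION would turn off everything newer than MPI 2.0 on MS-MPI, whatever the library actually ships. A minimal sketch of such a gate (not PETSc code; HAVE_MPI22_FEATURES is a hypothetical helper macro):

    #include <mpi.h>

    /* Gate features on the advertised standard level.  With MS-MPI's header
       reporting MPI_VERSION 2 / MPI_SUBVERSION 0, this stays false no matter
       what the library actually implements. */
    #if MPI_VERSION > 2 || (MPI_VERSION == 2 && MPI_SUBVERSION >= 2)
    #define HAVE_MPI22_FEATURES 1   /* hypothetical helper macro */
    #else
    #define HAVE_MPI22_FEATURES 0
    #endif

A configure-time probe that actually compiles and links each call (the PETSC_HAVE_MPI_XXX approach discussed further down) sidesteps the misreported version number entirely.
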
>
> They also don't provide an mpi.mod file, though they do provide mpif.h and Fortran stubs in their library; see the discussion at
> https://bitbucket.org/petsc/petsc/pull-requests/1666/error-and-stop-configure-if-the-mpi-module/diff
>
> As a side note, Microsoft's C compiler also does not support all of C99 :-)
>
>
>
>> On May 24, 2019, at 5:23 PM, Jeff Hammond via petsc-dev <petsc-dev at mcs.anl.gov> wrote:
>>
>> No, it's really not better to keep it. MPI 2.2 support is ubiquitous. It has been 10 years, which is 1-2 lifetimes of an HPC system or PC. Anybody who insists on using an MPI library that doesn't support 2.2 should accept that they must use a version of PETSc from 2018 or earlier.
>>
>> In the HPC space, MPI 3.0 has been available on most machines for 5+ years. The last platform that I used that didn't have MPI 2.2 support was IBM Blue Gene/P and all of those machines were taken offline long ago. As of SC18, the MPI 3.1 support matrix (see below) is essentially complete and the only feature that PETSc would need to test for is MS-MPI's lack of neighborhood collectives.
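
To make that last sentence concrete, here is a hedged sketch of the one guard the support matrix would still call for; the macro name and the helper are illustrative, not existing PETSc symbols:

    #include <mpi.h>

    /* Hedged sketch, not PETSc source.  PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES
       stands in for a configure-detected macro (name illustrative). */
    static int ExchangeWithNeighbors(const double *sendbuf, const int *sendcounts, const int *sdispls,
                                     double *recvbuf, const int *recvcounts, const int *rdispls,
                                     MPI_Comm distgraphcomm)
    {
    #if defined(PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES)
      return MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_DOUBLE,
                                    recvbuf, recvcounts, rdispls, MPI_DOUBLE, distgraphcomm);
    #else
      /* On an MPI without neighborhood collectives (e.g. MS-MPI per the remark
         above) one would post MPI_Isend/MPI_Irecv to each graph neighbor
         instead; omitted for brevity. */
      return MPI_ERR_UNSUPPORTED_OPERATION;
    #endif
    }
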
>>
>> I am aware that people are using Open-MPI 1.10 in production today. These people are bad. Don't allow their poor life choices to force the pollution of PETSc source code with unnecessary macros.
>>
>> https://lists.mpi-forum.org/pipermail/mpi-forum/2014-June/006086.html <- MPI 3.0
>> https://lists.mpi-forum.org/pipermail/mpi-forum/2016-November/006532.html <- MPI 3.1
>> https://lists.mpi-forum.org/pipermail/mpi-forum/2018-November/006783.html <- MPI 3.1
>>
>> Jeff
>>
>> On Fri, May 24, 2019 at 2:15 PM Zhang, Junchao via petsc-dev <petsc-dev at mcs.anl.gov> wrote:
>> PetscSF has many PETSC_HAVE_MPI_REDUCE_LOCAL guards, which is disturbing. But considering the time gap between MPI-2.0 (1998) and MPI-2.2 (2009), it is better to keep them.
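
For context, a minimal sketch of what such a guard buys (the helper is hypothetical, not the PetscSF source; PETSC_HAVE_MPI_REDUCE_LOCAL is set by configure when MPI_Reduce_local, added in MPI-2.2, compiles and links):

    #include <mpi.h>

    static int SumIntoLocal(const int *sendbuf, int *recvbuf, int n)
    {
    #if defined(PETSC_HAVE_MPI_REDUCE_LOCAL)
      return MPI_Reduce_local(sendbuf, recvbuf, n, MPI_INT, MPI_SUM);
    #else
      int i;
      for (i = 0; i < n; i++) recvbuf[i] += sendbuf[i]; /* hand-rolled MPI_SUM fallback */
      return MPI_SUCCESS;
    #endif
    }
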
>>
>>
>> On Fri, May 24, 2019 at 3:53 PM Jed Brown <jed at jedbrown.org> wrote:
>> "Zhang, Junchao" <jczhang at mcs.anl.gov> writes:
>>
>> > How about stuff in MPI-2.2 (approved in 2009), the last of MPI-2.x, e.g., PETSC_HAVE_MPI_REDUCE_LOCAL?
>>
>> Currently we only require MPI-2.0, but I would not object to increasing
>> to MPI-2.1 or 2.2 if such systems are sufficiently rare (almost
>> nonexistent) in the wild. I'm not sure how great the benefits are.
>>
>> > On Fri, May 24, 2019 at 2:51 PM Jed Brown via petsc-dev <petsc-dev at mcs.anl.gov> wrote:
>> > Lisandro Dalcin via petsc-dev <petsc-dev at mcs.anl.gov> writes:
>> >
>> >> These two are definitely wrong, we need PETSC_HAVE_MPI_XXX instead.
>> >
>> > Thanks, we can delete both of these cpp guards.
>> >
>> >> include/petscsf.h:#if defined(MPI_REPLACE)
>> >
>> > MPI-2.0
>> >
>> >> src/sys/objects/init.c:#if defined(PETSC_USE_64BIT_INDICES) || !defined(MPI_2INT)
>> >
>> > MPI-1.0
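
To spell out why those two guards are wrong: MPI_REPLACE is an MPI_Op handle and MPI_2INT an MPI_Datatype handle, and an implementation is free to define them as enum values or extern constants rather than preprocessor macros, so defined() can be false even on a library that provides them. A hedged sketch of the contrast (the PETSC_HAVE_ name is illustrative, following the PETSC_HAVE_MPI_XXX pattern Lisandro mentions):

    #include <mpi.h>

    /* Unreliable: only works if the handle happens to be a #define. */
    #if defined(MPI_REPLACE)
    /*   ... use MPI_REPLACE ... */
    #endif

    /* Robust: configure compiles and links a small test program against the
       MPI library and records the result (macro name illustrative). */
    #if defined(PETSC_HAVE_MPI_REPLACE)
    /*   ... use MPI_REPLACE ... */
    #endif

And since MPI_REPLACE is MPI-2.0 and MPI_2INT is MPI-1.0, both within what PETSc already requires, these two particular guards can simply go away.
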
>>
>>
>> --
>> Jeff Hammond
>> jeff.science at gmail.com
>> http://jeffhammond.github.io/