[petsc-dev] p4est w/o MPI

Satish Balay balay at mcs.anl.gov
Fri Apr 16 10:50:26 CDT 2021


Ok. MPICH/OpenMPI could be used for sequential runs and for parallel runs within a node.

[and not between nodes?]
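
A minimal sketch of that route (--download-mpich, --download-p4est, and the
compiler options are real PETSc configure options; the application name is a
placeholder, not from this thread):

  # Let PETSc build its own MPICH with GCC, since no system MPI for GCC
  # exists on the machine:
  ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran \
      --download-mpich --download-p4est
  # One build then covers both cases:
  mpiexec -n 1 ./app    # sequential run
  mpiexec -n 4 ./app    # parallel within a node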

Satish

On Fri, 16 Apr 2021, Mark Adams wrote:

> On Fri, Apr 16, 2021 at 11:33 AM Satish Balay <balay at mcs.anl.gov> wrote:
> 
> > Mark, why a no-MPI build on Fugaku?
> >
> 
> Kokkos Kernels + OpenMP barfed, and Kokkos suggested GCC. There does not
> seem to be an MPI for GCC here.
> 
> 
> >
> > Toby - there is one more issue with p4est: a build error when using
> > --with-batch=1
> >
> > Attaching configure.log
> >
> > Satish
> >
> > On Fri, 16 Apr 2021, Isaac, Tobin G wrote:
> >
> > > p4est has a mode where it can compile without MPI; I don't know if
> > > PETSc is using it, will check. [sketch below the quoted thread]
> > >
> > > ________________________________________
> > > From: petsc-dev <petsc-dev-bounces at mcs.anl.gov> on behalf of Mark Adams
> > <mfadams at lbl.gov>
> > > Sent: Friday, April 16, 2021 10:23
> > > To: For users of the development version of PETSc
> > > Subject: [petsc-dev] p4est w/o MPI
> > >
> > > I don't have MPI (Fugaku w/ gcc) and p4est seems to need it. Is there a
> > > workaround?
> > > Thanks,
> > > Mark
> > >
> >
> 
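
For reference, a standalone p4est build defaults to no MPI (MPI is only
enabled via its --enable-mpi configure switch), and PETSc itself accepts
--with-mpi=0. Whether --download-p4est honors --with-mpi=0 is exactly what
Toby is checking above, so treat this as a sketch rather than a confirmed
recipe:

  # Standalone p4est, no MPI (the default for p4est's configure):
  ./configure && make

  # Hypothetical PETSc no-MPI build; both flags are real configure
  # options, but their combination is the open question in this thread:
  ./configure --with-cc=gcc --with-mpi=0 --download-p4est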


