[petsc-dev] Question about MPICH device we use

Satish Balay balay at mcs.anl.gov
Thu Jul 23 23:56:06 CDT 2020


I should also note: the test suite is also run by users - not just in CI.

Only yesterday I suggested that Oana try nemesis for a different issue [on WSL] - and the response was 'the test suite is slow', so she reverted back to sock [and tried a different workaround for that issue].

Satish
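
For reference, the device is chosen at PETSc configure time. A minimal sketch, assuming a build from the PETSc source tree (only --download-mpich and --download-mpich-device are taken from this thread; any other configure options you normally pass are omitted here):

    # current default device: ch3:sock (tolerates oversubscription, a few percent slower)
    ./configure --download-mpich --download-mpich-device=ch3:sock

    # suggested for optimized builds on dedicated cores: ch3:nemesis
    ./configure --download-mpich --download-mpich-device=ch3:nemesis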

On Thu, 23 Jul 2020, Satish Balay via petsc-dev wrote:

> The primary reason is for users - developing on laptops/desktops and doing development runs in oversubscribed mode.
> 
> The choice was a few percent loss in performance for sock vs. an exponential cost for oversubscribed usage of nemesis [so we defaulted to sock].
> 
> I think we should preserve this behavior for at least the debug builds [i.e., switch only optimized builds to nemesis].
> 
> In CI we do pay this extra cost for some of the builds [that explicitly test with nemesis, OpenMPI, etc.].
> 
> Satish
> 
> On Thu, 23 Jul 2020, Jed Brown wrote:
> 
> > I think we should default to ch3:nemesis with --download-mpich, and only use ch3:sock when explicitly requested (which we would do in CI).
> > 
> > Satish Balay via petsc-dev <petsc-dev at mcs.anl.gov> writes:
> > 
> > > Primarily because ch3:sock performance does not degrade in oversubscribed mode - which is developer friendly - i.e., on your laptop.
> > >
> > > And folks doing optimized runs should use a properly tuned MPI for their setup anyway.
> > >
> > > In this case --download-mpich-device=ch3:nemesis is likely appropriate if using --download-mpich [and not using a separate/optimized MPI]
> > >
> > > Having defaults that satisfy all use cases is not practical.
> > >
> > > Satish
> > >
> > > On Wed, 22 Jul 2020, Matthew Knepley wrote:
> > >
> > >> We default to ch3:sock. Scott MacLachlan just had a long thread on the
> > >> Firedrake list which concluded that reconfiguring with ch3:nemesis gave a
> > >> 2x performance boost on his 16-core processor, and a noticeable improvement
> > >> in the 4-core speedup.
> > >> 
> > >> Why do we default to sock?
> > >> 
> > >>   Thanks,
> > >> 
> > >>      Matt
> > >> 
> > >> 
> > 
> 
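
The oversubscribed scenario referred to above is the typical development workflow of running more MPI ranks than physical cores on a laptop. A hypothetical illustration (the executable name and rank count are made up; the behavior noted in the comments is the tradeoff stated in the thread):

    # e.g. 16 ranks on a 4-core laptop:
    # ch3:sock degrades only mildly here, while ch3:nemesis slows down badly
    mpiexec -n 16 ./my_petsc_app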


