[petsc-dev] PETSc - MPI3 functionality

Jeff Hammond jeff.science at gmail.com
Mon Sep 10 13:55:53 CDT 2018


On Sun, Sep 9, 2018 at 6:09 AM, Jed Brown <jed at jedbrown.org> wrote:

> Tamara Dancheva <tamaradanceva19933 at gmail.com> writes:
>
> > Hi Barry,
> >
> > I see the issue..
> >
> >  In the FEM library and solver that I am working on, PETSc is used
> > throughout for data distribution, synchronization of functions, and
> > assembly. There is an alternative UPC path using the JANPACK linear
> > algebra backend (http://www.csc.kth.se/~njansson/janpack/), which
> > gives increased performance.
>
> How do you know the JANPACK performance is better?  The figures on that
> website appeared in a paper submission that was ultimately rejected
> after it was discovered that the convergence criteria actually differed
> by orders of magnitude and the reference PETSc results were uniformly
> faster.  The most recent release appears to have been in 2015.
>

Note also that UPC is perhaps the least performance-portable foundation on
which one can build a distributed-memory HPC library.  The Cray XE6 used
the Gemini interconnect, a PGAS NIC designed to run UPC and related
models, and the Cray UPC compiler is naturally designed to max out
performance on Cray's PGAS NICs.  There are very few other platforms
where the UPC user experience will approach this.

In contrast, MPI send-recv runs well (relative to the hardware) on
everything from token-ring Ethernet to the most expensive supercomputer you
can buy.

Thus, even if one assumes that the performance comparisons are completely
fair, they come from an outlier platform, and the relative performance on
most other machines will be nowhere near as favorable to JANPACK.  The
JANPACK developers need to publish comparisons on multiple commodity
networks as well.

Jeff


> > My project explores another pathway to optimization, given that this
> > software targets large-scale computations: an asynchronous version of
> > the algorithm, for which I have implemented a Block-Jacobi method
> > with inner Krylov solvers (the inner solves done with PETSc). This
> > version aims for a speedup factor of about 1.7-2.0 (based on some
> > literature, although not in exactly the same context).
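
For concreteness, here is a minimal, self-contained sketch of block Jacobi
with an inner Krylov solve per block through PETSc's standard KSP/PC
interface.  It is purely illustrative: the toy diagonal operator and the
GMRES/ILU choices are placeholders, not the FEM code discussed in this
thread.

#include <petscksp.h>

/* Illustrative only: block Jacobi with an inner Krylov solve per block,
 * set up through the standard KSP/PC interface.  The diagonal matrix is
 * a stand-in for an assembled FEM operator. */
int main(int argc, char **argv)
{
  KSP            ksp, *subksp;
  PC             pc, subpc;
  Mat            A;
  Vec            x, b;
  PetscInt       n = 100, i, rstart, rend, nlocal, first;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n,
                      1, NULL, 0, NULL, &A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCBJACOBI);CHKERRQ(ierr);   /* outer: block Jacobi */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);

  /* Give each local block its own (inner) Krylov solver. */
  ierr = PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp);CHKERRQ(ierr);
  for (i = 0; i < nlocal; i++) {
    ierr = KSPSetType(subksp[i], KSPGMRES);CHKERRQ(ierr);
    ierr = KSPGetPC(subksp[i], &subpc);CHKERRQ(ierr);
    ierr = PCSetType(subpc, PCILU);CHKERRQ(ierr);
  }

  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}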
>
> Could you share what literature you are basing this estimate on?  It's
> important to make comparisons using a performance model.  For example,
> if current PETSc results attain 70% of STREAM bandwidth, then no amount
> of latency/communication optimization will yield your desired
> improvement factors.  On the other hand, if your solver is latency
> dominated due to pushing to the limit of strong scalability, then these
> optimizations might be possible (with many caveats).
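
To make that bound concrete (illustrative arithmetic, not a measurement):
if roughly 70% of the run time is already memory-bandwidth limited, then
hiding *all* of the remaining communication and latency cost still caps
the speedup at about 1/0.7 ~= 1.4x, short of the 1.7-2.0 target above.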
>
> If you could send -log_view output for your application, it would help
> us understand the performance setting of your current solver
> configuration.
>
> > and it is done with much the same motivation as ExaFLOW
> > (http://exaflow-project.eu/), I would say. This still requires me to
> > modify the ghost exchange routines so that the processes can advance
> > out of sync. I could implement this outside of PETSc, but that would
> > significantly increase the memory footprint, since the necessary
> > data is currently fed to PETSc and then discarded. In this context,
> > since PETSc also works with and stores MPI requests, I can reuse and
> > extend that implementation, since it is close to the approach I have
> > in mind (using either a circular buffer of limited size holding MPI
> > requests, or non-blocking collectives). I had also considered not
> > using PETSc at all, to avoid all the blocking regions, but given the
> > scope of my project I deemed that it would take too long to
> > implement and validate.
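
For what it's worth, below is a small stand-alone sketch of one way such a
bounded pool of requests could look; it is only a guess at the approach
described above (the RequestRing type and functions are made up; only the
MPI calls are standard):

#include <mpi.h>

/* Hypothetical sketch (names are mine, not PETSc's): a fixed-size ring of
 * MPI_Request slots bounding the number of in-flight nonblocking sends of
 * ghost data.  When the ring is full, the oldest request is completed
 * before the next message is posted. */
#define RING_SIZE 16

typedef struct {
  MPI_Request req[RING_SIZE];
  int         head;                  /* next slot to (re)use */
} RequestRing;

static void ring_init(RequestRing *r)
{
  for (int i = 0; i < RING_SIZE; i++) r->req[i] = MPI_REQUEST_NULL;
  r->head = 0;
}

/* Post a nonblocking send, recycling the oldest slot.  Waiting on
 * MPI_REQUEST_NULL is a no-op, so this is safe before the ring fills. */
static void ring_isend(RequestRing *r, const double *buf, int count,
                       int dest, int tag, MPI_Comm comm)
{
  MPI_Wait(&r->req[r->head], MPI_STATUS_IGNORE);
  MPI_Isend(buf, count, MPI_DOUBLE, dest, tag, comm, &r->req[r->head]);
  r->head = (r->head + 1) % RING_SIZE;
}

/* Complete everything outstanding, e.g. before reusing send buffers. */
static void ring_flush(RequestRing *r)
{
  MPI_Waitall(RING_SIZE, r->req, MPI_STATUSES_IGNORE);
}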
>
> It's very reasonable to implement in PETSc, but let's discuss the
> communication pattern first.  You said you are working with a FEM model,
> but also mention "igatherv".  Is this for some sequential mesh
> processing task or is it related to the solver?  There isn't a
> neighborhood igatherv and MPI_Igatherv isn't a pattern that should ever
> be needed in a FEM solver.
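
For comparison, the MPI-3 pattern that does fit FEM ghost exchange is a
nonblocking *neighborhood* collective on a distributed graph topology.  A
purely illustrative sketch (not PETSc's internal implementation; the
ghost_exchange() signature is hypothetical, the MPI calls are standard):

#include <mpi.h>

/* In practice the graph communicator would be created once during setup,
 * not on every exchange. */
void ghost_exchange(MPI_Comm comm,
                    int nneigh, const int neighbors[], /* ranks sharing a partition boundary */
                    const double *sendbuf, const int sendcounts[], const int sdispls[],
                    double *recvbuf, const int recvcounts[], const int rdispls[])
{
  MPI_Comm    nbr_comm;
  MPI_Request req;

  /* Symmetric neighborhood: we send to and receive from the same ranks. */
  MPI_Dist_graph_create_adjacent(comm, nneigh, neighbors, MPI_UNWEIGHTED,
                                 nneigh, neighbors, MPI_UNWEIGHTED,
                                 MPI_INFO_NULL, 0, &nbr_comm);

  /* Nonblocking neighborhood exchange of ghost values. */
  MPI_Ineighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_DOUBLE,
                          recvbuf, recvcounts, rdispls, MPI_DOUBLE,
                          nbr_comm, &req);

  /* ... overlap local computation here ... */

  MPI_Wait(&req, MPI_STATUS_IGNORE);
  MPI_Comm_free(&nbr_comm);
}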
>



-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/