[petsc-dev] PETSc - MPI3 functionality

Tamara Dancheva tamaradanceva19933 at gmail.com
Sat Sep 8 03:42:44 CDT 2018


Hi Barry,

I see the issue.

In the FEM library and solver that I am working on, PETSc is used
throughout for data distribution, synchronization of functions, and
assembly. There is also a UPC alternative that uses the JANPACK linear
algebra backend (http://www.csc.kth.se/~njansson/janpack/), which gives
increased performance. My project explores another pathway of
optimization, given that this software targets large-scale computations:
an asynchronous version of the algorithm, for which I have implemented a
Block-Jacobi method with inner Krylov solvers (the inner solves done with
PETSc). This version aims for a speedup factor of about 1.7-2.0 (based on
some literature, although not in exactly the same context) and is done
with much the same motivation as ExaFLOW (http://exaflow-project.eu/), I
would say.
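
To make the inner solve concrete, here is a minimal sketch (not my actual
solver code, just the standard PETSc way of setting this up) of
Block-Jacobi with inner Krylov solves; the system matrix A and the vectors
b, x are assumed to be assembled already:

  #include <petscksp.h>

  /* Sketch: outer solver with a Block-Jacobi preconditioner, where every
   * local block is solved with its own (inner) Krylov method. */
  static PetscErrorCode SolveBlockJacobiInnerKrylov(Mat A, Vec b, Vec x)
  {
    KSP            ksp, *subksp;   /* outer solver and per-block inner solvers */
    PC             pc;
    PetscInt       i, nlocal, first;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCBJACOBI);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSetUp(ksp);CHKERRQ(ierr);   /* sub-KSPs exist only after setup */

    /* give every local block its own Krylov solver (GMRES here) */
    ierr = PCBJacobiGetSubKSP(pc,&nlocal,&first,&subksp);CHKERRQ(ierr);
    for (i = 0; i < nlocal; i++) {
      ierr = KSPSetType(subksp[i],KSPGMRES);CHKERRQ(ierr);
      ierr = KSPSetTolerances(subksp[i],1e-6,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);
    }

    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }
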
The asynchronous version still requires me to modify the ghost exchange
routines so that the processes can advance out of sync. I could implement
this outside of PETSc, but that would significantly increase the memory
footprint, since the necessary data is currently fed to PETSc and then
discarded. In this context, since PETSc already stores and works with MPI
requests, I can reuse and extend its implementation, which is close to the
approach I have in mind (a circular, limited-size buffer of MPI_Requests
combined with non-blocking collectives). I had also considered not using
PETSc at all, to avoid all the blocking regions, but given the scope of my
project I deemed that it would take too long to implement and validate.
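
For what it is worth, the request-ring idea looks roughly like the sketch
below (plain MPI-3, outside of PETSc; RequestRing, ring_init and
ring_post_exchange are made-up names just for illustration, and a real
ghost exchange would of course only communicate with neighbouring ranks):

  #include <mpi.h>

  /* A small circular buffer of outstanding MPI_Requests, so that a process
   * can keep posting new ghost exchanges without waiting for its
   * neighbours at every iteration. */
  #define RING_SIZE 8

  typedef struct {
    MPI_Request req[RING_SIZE];
    int         head;              /* next slot to (re)use */
  } RequestRing;

  static void ring_init(RequestRing *r)
  {
    int i;
    for (i = 0; i < RING_SIZE; i++) r->req[i] = MPI_REQUEST_NULL;
    r->head = 0;
  }

  /* Reclaim the oldest slot (completing it if it is still in flight) and
   * post a non-blocking collective exchange of this rank's ghost values
   * into it. */
  static void ring_post_exchange(RequestRing *r, const double *send, int nsend,
                                 double *recv, const int *counts, const int *displs,
                                 MPI_Comm comm)
  {
    MPI_Request *slot = &r->req[r->head];
    if (*slot != MPI_REQUEST_NULL) MPI_Wait(slot, MPI_STATUS_IGNORE);
    MPI_Iallgatherv(send, nsend, MPI_DOUBLE, recv, counts, displs, MPI_DOUBLE,
                    comm, slot);
    r->head = (r->head + 1) % RING_SIZE;
  }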

Hope this sums it up well,
Tamara


On Sat, Sep 8, 2018 at 4:28 AM Smith, Barry F. <bsmith at mcs.anl.gov> wrote:

>
>
>     Tamara,
>
>        The VecScatter routines are in a big state of flux now as we try to
> move from a monolithic implementation (where many cases were handled with
> cumbersome if checks in the code) to simpler, independent, standalone
> implementations that easily allow new implementations orthogonal to the
> current ones. So it is not a good time to dive in.
>
>     We are trying to do the refactoring, but it is a bit frustrating and
> slow.
>
>      Can you tell us why you feel you need a custom implementation? Is the
> current implementation too slow (how do you know it is too slow?)?
>
>     Barry
>
> > On Sep 7, 2018, at 12:26 PM, Tamara Dancheva <
> tamaradanceva19933 at gmail.com> wrote:
> >
> > Hi,
> >
> > I am developing an asynchronous method for a FEM solver, and need a
> custom implementation of the VecScatterBegin and VecScatterEnd routines.
> Since PETSc uses its own limited set of MPI functions, could you tell me
> what would be the best way to extend it and use, for example, the
> non-blocking collectives (MPI_Igatherv and so on)?
> >
> > I hope the question is specific enough; let me know if anything is
> unclear and I can provide more information. I would very much appreciate
> any help, thanks in advance!
> >
> > Best,
> > Tamara
>
>