[petsc-dev] parallel direct solvers for MG

Mark Adams mfadams at lbl.gov
Tue Jun 27 06:36:10 CDT 2017


After talking with Garth, it is clear that this will not work.

We are now thinking that we should replace the MG object with Telescope.
Telescope seems to be designed as a superset of MG. Telescope does the
processor reduction, but GAMG does as well, so we would have to reconcile
the two. Does this sound like a good idea? Am I missing anything important?
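
For concreteness, here is roughly what that combination looks like through
the options database today, with Telescope on the coarse level of a GAMG
hierarchy. This is only a sketch: the reduction factor and the choice of
MUMPS are illustrative, and depending on the PETSc version the last option
may be spelled ..._mat_solver_type rather than ..._mat_solver_package.

  -pc_type gamg
  -pc_gamg_process_eq_limit 50                  # GAMG's own processor reduction
  -mg_coarse_pc_type telescope                  # Telescope wraps the coarse solve
  -mg_coarse_pc_telescope_reduction_factor 64
  -mg_coarse_telescope_ksp_type preonly
  -mg_coarse_telescope_pc_type lu
  -mg_coarse_telescope_pc_factor_mat_solver_package mumps

Both -pc_gamg_process_eq_limit and the Telescope reduction factor shrink
the set of active processes, which is exactly the overlap we would have to
reconcile.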

Mark

On Tue, Jun 27, 2017 at 4:48 AM, Mark Adams <mfadams at lbl.gov> wrote:

> Parallel coarse grid solvers are a bit broken at large scale, where you
> don't want to use all processors on the coarse grid. The ideal thing might
> be to create a sub-communicator, but it's not clear how to integrate this
> (e.g., check whether the sub-communicator exists before calling the coarse
> grid solver and convert if necessary), which is a bit messy (a rough
> sketch of that step follows below the quote). It would be nice if a
> parallel direct solver did not redistribute the matrix, but then it would
> be asking too much for it to reorder as well, so we could end up with a
> crappy ordering. So maybe the first option would be best long term.
>
> I see we have MUMPS and PaStiX. Does either of these avoid redistributing
> the matrix if asked?
>
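
To make the "first option" in the quoted message a bit more concrete, here
is a minimal sketch of the check-and-convert step, assuming nothing about
where it would hook into the MG setup. The function name and the policy of
keeping the first nsolve ranks are purely illustrative, and the actual
gathering of the coarse matrix onto the sub-communicator is only indicated
by a comment.

#include <petscsys.h>

/* Create (or reuse) a communicator for the coarse solve. Ranks outside the
   sub-communicator come back with MPI_COMM_NULL and simply skip the solve. */
static PetscErrorCode CoarseSubcommCreate(MPI_Comm comm, PetscMPIInt nsolve, MPI_Comm *subcomm)
{
  PetscMPIInt    rank, size, color;
  PetscErrorCode ierr;

  ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
  ierr = MPI_Comm_size(comm, &size);CHKERRQ(ierr);
  if (nsolve >= size) {              /* nothing to reduce: keep the original communicator */
    *subcomm = comm;
    return 0;
  }
  color = rank < nsolve ? 0 : MPI_UNDEFINED;  /* first nsolve ranks take part in the solve */
  ierr  = MPI_Comm_split(comm, color, rank, subcomm);CHKERRQ(ierr);
  /* The messy part remains: gather the coarse matrix and right-hand side
     onto *subcomm, build the direct-solver KSP there, and scatter the
     solution back after KSPSolve(). */
  return 0;
}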