[petsc-dev] parallel direct solvers for MG

Mark Adams mfadams at lbl.gov
Tue Jun 27 10:47:58 CDT 2017


On Tue, Jun 27, 2017 at 11:41 AM, Hong <hzhang at mcs.anl.gov> wrote:

> For '-pc_type mg', we use '-mg_levels_0_pc_type redundant -mg_coarse_pc_redundant_number
> <n>' to control the number of processors for each coarse-grid solve; n <= np, and
> n = np is the default, which solves the coarse grid sequentially.
>
> See petsc/src/snes/examples/tutorials/runex48_4, to which
> you can add the option '-mg_coarse_pc_redundant_number 2' to get two
> subcommunicators at the coarse-grid level.
>
> Can you use PCREDUNDANT for GAMG?
>

Hmm, maybe. I don't know the number of processors before the run, but I
could manually set the flag, or use a functional interface to set this;
does that exist?

Since this is for a redundant solve, I assume the coarse-grid process count
has to be a factor of the global number of processors. Is that right? I
don't do that now, but I suppose I could change my logic. Not a big deal.
(I could also look at the Telescope parameters in the process and try to
align the two.)
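
For what it is worth, PCMGGetCoarseSolve() and PCRedundantSetNumber() do
exist, so a functional interface can be assembled once the PCMG/PCGAMG
preconditioner has been set up and the coarse KSP exists. A minimal sketch
(the helper name is made up), roughly equivalent to
'-mg_coarse_pc_type redundant -mg_coarse_pc_redundant_number <nred>':

    #include <petscksp.h>

    /* Sketch: switch the coarsest MG level to a redundant solve on "nred"
       subcommunicators. Assumes "pc" is an already-set-up PCMG/PCGAMG. */
    PetscErrorCode SetRedundantCoarseSolve(PC pc, PetscInt nred)
    {
      KSP            cksp;
      PC             cpc;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = PCMGGetCoarseSolve(pc, &cksp);CHKERRQ(ierr);   /* coarsest-level KSP */
      ierr = KSPGetPC(cksp, &cpc);CHKERRQ(ierr);
      ierr = PCSetType(cpc, PCREDUNDANT);CHKERRQ(ierr);
      ierr = PCRedundantSetNumber(cpc, nred);CHKERRQ(ierr); /* number of subcommunicators */
      PetscFunctionReturn(0);
    }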


>
> Hong
>
> On Tue, Jun 27, 2017 at 9:46 AM, Mark Adams <mfadams at lbl.gov> wrote:
>
>>
>>
>> On Tue, Jun 27, 2017 at 8:35 AM, Matthew Knepley <knepley at gmail.com>
>> wrote:
>>
>>> On Tue, Jun 27, 2017 at 6:36 AM, Mark Adams <mfadams at lbl.gov> wrote:
>>>
>>>> After talking with Garth, I think this will not work.
>>>>
>>>> I/we are now thinking that we should replace the MG object with
>>>> Telescope. Telescope seems to be designed as a superset of MG. Telescope
>>>> does the processor reduction, and GAMG does as well, so we would have to
>>>> reconcile this. Does this sound like a good idea? Am I missing anything
>>>> important?
>>>>
>>>
>>> I don't think "replace" is the right word. Telescope only does process
>>> reduction. It does not do control flow for solvers or
>>> restriction/prolongation. You can see Telescope interacting with MG
>>> here:
>>>
>>
>> Oh, it is not the answer at all!
>>
>>
>>>
>>>   https://arxiv.org/abs/1604.07163
>>>
>>> I think more of this should be "default", in that the options are turned
>>> on if you are running GMG on a large number of procs.
>>>
>>> I also think GAMG should reuse the telescope code for doing reduction,
>>> but I am not sure how hard this is. Mark?
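
One way Telescope and MG interact (a sketch, not necessarily the exact
configuration used in that paper): Telescope sits as the coarse-level PC
inside MG, gathers the coarse operator onto a communicator reduced by an
integer factor, and runs an inner KSP/PC there. Options along these lines,
worth double-checking against the PCTELESCOPE manual page:

    -mg_coarse_pc_type telescope
    -mg_coarse_pc_telescope_reduction_factor 4
    -mg_coarse_telescope_ksp_type preonly
    -mg_coarse_telescope_pc_type lu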
>>>
>>
>> There is a little logic in there for selecting the number of processors
>> on the coarse grid. I think the way we integrate this is to integrate the
>> parameters, if we want. I take a hint on the number of equations to try to
>> keep on a process (nnz would be better); I don't take hints on cluster
>> size, and I don't reduce the number of processors by an integer factor. I
>> could change these to be more in line with Telescope, but that does not
>> solve our problem.
>>
>> Does PETSc now support matrices with an LHS and an RHS communicator? I
>> think it does. I could just make subcommunicators for each level in the
>> GAMG setup. I run PtAP, then see what I get and reduce the number of
>> processors (using something like MatGetSubMatrix, as I recall, to aggregate
>> the matrix), and repartition if desired (one should).
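
A sketch of that reduce step using MatGetSubMatrix() with a stride IS so
that all rows of the level operator land on the first "nactive" ranks (the
helper name is made up; this keeps the original communicator, leaving zero
local rows on the other ranks, and repartitioning would be a separate step):

    #include <petscmat.h>

    /* Sketch: gather the rows/columns of A onto the first "nactive" ranks of
       A's communicator; the remaining ranks end up with zero local rows. */
    PetscErrorCode GatherOperator(Mat A, PetscInt nactive, Mat *Ared)
    {
      MPI_Comm       comm;
      PetscMPIInt    rank;
      PetscInt       N, q, r, nloc, rstart;
      IS             is;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = PetscObjectGetComm((PetscObject)A, &comm);CHKERRQ(ierr);
      ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
      ierr = MatGetSize(A, &N, NULL);CHKERRQ(ierr);
      q = N / nactive; r = N % nactive;
      if (rank < nactive) {              /* first nactive ranks split the rows */
        nloc   = q + (rank < r ? 1 : 0);
        rstart = rank*q + PetscMin(rank, r);
      } else {                           /* remaining ranks own nothing */
        nloc   = 0;
        rstart = N;
      }
      ierr = ISCreateStride(comm, nloc, rstart, 1, &is);CHKERRQ(ierr);
      ierr = MatGetSubMatrix(A, is, is, MAT_INITIAL_MATRIX, Ared);CHKERRQ(ierr);
      ierr = ISDestroy(&is);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }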
>>
>> Would it make sense for me to do this in GAMG, and see if it breaks
>> anything in MG?
>>
>>
>>>
>>>   Thanks,
>>>
>>>     Matt
>>>
>>>
>>>> Mark
>>>>
>>>> On Tue, Jun 27, 2017 at 4:48 AM, Mark Adams <mfadams at lbl.gov> wrote:
>>>>
>>>>> Parallel coarse-grid solvers are a bit broken at large scale, where you
>>>>> don't want to use all processors on the coarse grid. The ideal thing might
>>>>> be to create a subcommunicator, but it's not clear how to integrate this
>>>>> in (e.g., check whether the subcommunicator exists before calling the
>>>>> coarse-grid solver and convert if necessary). A bit messy. It would be nice
>>>>> if a parallel direct solver would not redistribute the matrix, but then it
>>>>> would be asking too much for it to also reorder, so we could end up with a
>>>>> crappy ordering. So maybe the first option would be best long term.
>>>>>
>>>>> I see we have MUMPS and PaStiX. Does either of these avoid redistributing
>>>>> if asked?
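
For reference, the kind of coarse-level direct solve under discussion is
selected with options along these lines (a sketch; the package-selection
option name is as of the PETSc version of that time):

    -mg_coarse_ksp_type preonly
    -mg_coarse_pc_type lu
    -mg_coarse_pc_factor_mat_solver_package mumps

The factorization then runs on however many processes the coarse matrix
lives on, which is where the redistribution question comes in.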
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> http://www.caam.rice.edu/~mk51/
>>>
>>
>>
>