[petsc-dev] asm / gasm
Boyce Griffith
griffith at cims.nyu.edu
Wed Jun 22 19:42:59 CDT 2016
> On Jun 22, 2016, at 7:34 PM, Boyce Griffith <griffith at cims.nyu.edu> wrote:
>
>
>
> On Jun 22, 2016, at 6:23 PM, Mark Adams <mfadams at lbl.gov> wrote:
>
>>
>>
>> On Wed, Jun 22, 2016 at 8:14 PM, Boyce Griffith <griffith at cims.nyu.edu> wrote:
>>
>>> On Jun 22, 2016, at 2:06 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>>
>>>
>>> I suggest focusing on asm. Having blocks that span multiple processes seems like overkill for a smoother (major-league overkill, in fact). Doesn't one want multiple blocks per process, i.e., pretty small blocks?
>>
>> And with lots of small blocks, remember to configure with --with-viewfromoptions=0. :-)
>>
>> Yikes. That is overkill, right, unless you are worried about (users) accidentally using view and crashing the run with output?
>
> No joke, Amneet found that PCASM with lots of small subdomains was spending a ton of time in view calls.
(And, of course, Barry fixed it by adding the --with-viewfromoptions=0 configure flag --- thanks, Barry!)
Note that it does not matter whether you ever actually call view; the time shows up either way.
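For anyone else running into this, here is a minimal sketch of the configure flag plus an ASM-smoother option set along the lines Barry suggests above. The executable name, process count, block count, and sub-solver below are placeholders, not what Amneet actually ran:

  # Build PETSc with the view-from-options checks disabled (the flag above):
  ./configure --with-viewfromoptions=0 <other configure options>

  # Run GAMG with ASM as the level smoother and many small blocks;
  # 64 total blocks and ILU sub-solves are illustrative values only.
  mpiexec -n 4 ./app -ksp_type cg -pc_type gamg \
      -mg_levels_ksp_type richardson \
      -mg_levels_pc_type asm \
      -mg_levels_pc_asm_blocks 64 \
      -mg_levels_pc_asm_overlap 0 \
      -mg_levels_sub_pc_type ilu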
-- Boyce
>
> -- Boyce
>
>> I guess we should have a flag, or just not iterate over the blocks in ASMView ...
>>
>>
>> -- Boyce
>>
>>>
>>> Barry
>>>
>>>> On Jun 22, 2016, at 7:51 AM, Mark Adams <mfadams at lbl.gov> wrote:
>>>>
>>>> I'm trying to get block smoothers to work for gamg. We (Garth) tried this and got this error:
>>>>
>>>>
>>>> - Another option is to use '-pc_gamg_use_agg_gasm true' and '-mg_levels_pc_type gasm'.
>>>>
>>>>
>>>> Running in parallel, I get
>>>>
>>>> ** Max-trans not allowed because matrix is distributed
>>>> ----
>>>>
>>>> First, what is the difference between asm and gasm?
>>>>
>>>> Second, I need to fix this to get block smoothers. This used to work. Did we lose the capability to have blocks that span processor subdomains?
>>>>
>>>> gamg only aggregates across processor subdomains within one layer, so maybe I could use one layer of overlap in some way?
>>>>
>>>> Thanks,
>>>> Mark
>>>>
>>
>>
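For reference, a hedged sketch of the two smoother variants being discussed. As far as I understand it, asm (PCASM) requires each block to live on a single process, while gasm (PCGASM) generalizes this so that subdomains may span processes, which is the capability Mark is asking about. The option values below are illustrative only:

  # ASM level smoother: every block is local to one process.
  -mg_levels_pc_type asm -mg_levels_pc_asm_overlap 1 -mg_levels_sub_pc_type lu

  # GASM level smoother: subdomains may span processes; the
  # '-pc_gamg_use_agg_gasm true' path quoted above presumably uses the
  # GAMG aggregates to define those subdomains.
  -pc_gamg_use_agg_gasm true -mg_levels_pc_type gasm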