[petsc-dev] asm / gasm

Mark Adams mfadams at lbl.gov
Wed Jun 22 17:23:19 CDT 2016


On Wed, Jun 22, 2016 at 8:14 PM, Boyce Griffith <griffith at cims.nyu.edu>
wrote:

>
> On Jun 22, 2016, at 2:06 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>
>   I suggest focusing on asm. Having blocks that span multiple processes
> seems like overkill for a smoother (major-league overkill, in fact).
> Doesn't one want multiple blocks per process, i.e. pretty small blocks?
>
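
For reference, a minimal sketch of that kind of configuration on the command
line (the executable name, process count, and block counts are placeholders;
-pc_asm_blocks corresponds to PCASMSetTotalSubdomains):

    mpiexec -n 4 ./app -ksp_type cg -pc_type gamg \
        -mg_levels_pc_type asm \
        -mg_levels_pc_asm_blocks 16 \
        -mg_levels_pc_asm_overlap 0 \
        -mg_levels_sub_pc_type lu

i.e. 16 blocks spread over 4 processes gives a few small blocks per process,
each solved here with a direct factorization via the sub_ prefix.
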
>
> And with lots of small blocks, remember to configure
> with --with-viewfromoptions=0. :-)
>
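
(That is a configure-time switch, i.e. something along the lines of

    ./configure --with-viewfromoptions=0 [other configure options as usual]

which, if I understand that option correctly, turns the *ViewFromOptions
calls into no-ops.)
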

Yikes.  That is overkill, right? Unless you are worried about users
accidentally calling view and killing the run with a flood of output.

I guess we should add a flag, or just not iterate over all the blocks in ASMView ...


>
> -- Boyce
>
>
>   Barry
>
> On Jun 22, 2016, at 7:51 AM, Mark Adams <mfadams at lbl.gov> wrote:
>
> I'm trying to get block smoothers to work for gamg.  We (Garth) tried this
> and got this error:
>
>
> - Another option is to use '-pc_gamg_use_agg_gasm true' together with
> '-mg_levels_pc_type gasm'.
>
>
> Running in parallel, I get
>
>     ** Max-trans not allowed because matrix is distributed
> ----
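
For concreteness, the run that triggers this is along these lines (executable,
process count, and outer solver options are placeholders; the two gamg/gasm
options are the ones quoted above):

    mpiexec -n 4 ./app -pc_type gamg \
        -pc_gamg_use_agg_gasm true \
        -mg_levels_pc_type gasm

Serial runs presumably don't hit this, since the message complains
specifically about the matrix being distributed.
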
>
> First, what is the difference between asm and gasm?
>
> Second, I need to fix this to get block smoothers. This used to work.  Did
> we lose the capability to have blocks that span processor subdomains?
>
> gamg only aggregates across processor subdomains within one layer, so
> maybe I could use one layer of overlap in some way?
>
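
If plain asm is used on the levels, the overlap is just an option on the level
smoothers, e.g. (block count again a placeholder):

    -mg_levels_pc_type asm -mg_levels_pc_asm_blocks 16 -mg_levels_pc_asm_overlap 1

which corresponds to calling PCASMSetOverlap() on each level's PC.
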
> Thanks,
> Mark
>
>
>