[petsc-dev] asm / gasm

Mark Adams mfadams at lbl.gov
Wed Jun 22 17:20:11 CDT 2016


On Wed, Jun 22, 2016 at 8:06 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:

>
>    I suggest focusing on asm.


OK, I will switch from gasm to asm; gasm does not work anyway.
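For reference, the switch being described would look something like the following on the command line. This is only a sketch (exact behavior depends on the PETSc version); `-mg_levels_pc_type` and `-mg_levels_sub_pc_type` are the standard options for choosing the level smoother and its per-block solver:

```
# illustrative GAMG level-smoother options: ASM with a direct block solve
-mg_levels_pc_type asm
-mg_levels_sub_pc_type lu
```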


> Having blocks that span multiple processes seems like over kill for a
> smoother ?


No, because it is a pain to have the math convolved with the parallel
decomposition strategy (i.e., I can't tell an application how to partition
their problem). Aggregates spanning processor boundaries are fine and
needed; but say we have a pretty uniform problem: if a block gets split
up, H is small in part of the domain and convergence could suffer along
processor boundaries.  And having the math change as the parallel
decomposition changes is annoying.


> (Major league overkill) in fact doesn't one want multiple blocks per
> process, ie. pretty small blocks.
>

No, it is just doing what would be done in serial.  If the cost of moving
the data across processors is a problem, then that is a tradeoff to
consider.

And I think you are misunderstanding me.  There are lots of blocks per
process (the aggregates are, say, 3^D in size).  And many of the
aggregates/blocks along the processor boundary will be split between
processors, resulting in small blocks and a weak ASM PC on processor
boundaries.

I can understand ASM not being general and not letting blocks span
processor boundaries, but I don't think the extra matrix communication
costs are a big deal (done just once), and the vector communication costs
are not bad; it probably does not add (too many) new processors to
communicate with.
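The convergence point above can be illustrated with a toy experiment. The sketch below is not PETSc code: it runs block Jacobi (the zero-overlap case of additive Schwarz) on a small 1D Laplacian, once with whole blocks and once with the blocks halved, mimicking a partition boundary cutting an aggregate. All sizes and iteration counts are made up for illustration:

```python
def laplacian(n):
    """Dense 1D Dirichlet Laplacian: tridiag(-1, 2, -1)."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0
        if i > 0:
            A[i][i - 1] = -1.0
        if i < n - 1:
            A[i][i + 1] = -1.0
    return A

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting on a small dense block."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def block_jacobi_residual(n, block, iters):
    """Residual norm after `iters` sweeps of block Jacobi on A x = 1."""
    A = laplacian(n)
    b = [1.0] * n
    x = [0.0] * n
    for _ in range(iters):
        # full residual, then additive corrections from each block solve
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        for s in range(0, n, block):
            e = min(s + block, n)
            Ab = [[A[i][j] for j in range(s, e)] for i in range(s, e)]
            d = solve_dense(Ab, r[s:e])
            for k, i in enumerate(range(s, e)):
                x[i] += d[k]
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(ri * ri for ri in r) ** 0.5

whole = block_jacobi_residual(12, 6, 10)  # aggregates kept intact
split = block_jacobi_residual(12, 3, 10)  # aggregates halved at a "boundary"
print(whole < split)  # larger blocks give the stronger smoother
```

The standard comparison theorem for regular splittings says the bigger-block splitting has the smaller spectral radius, which is exactly the "H gets small, convergence suffers" effect in one dimension.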


>    Barry
>
> > On Jun 22, 2016, at 7:51 AM, Mark Adams <mfadams at lbl.gov> wrote:
> >
> > I'm trying to get block smoothers to work for gamg.  We (Garth) tried
> this and got this error:
> >
> >
> >  - Another option is use '-pc_gamg_use_agg_gasm true' and use
> '-mg_levels_pc_type gasm'.
> >
> >
> > Running in parallel, I get
> >
> >      ** Max-trans not allowed because matrix is distributed
> >  ----
> >
> > First, what is the difference between asm and gasm?
> >
> > Second, I need to fix this to get block smoothers. This used to work.
> Did we lose the capability to have blocks that span processor subdomains?
> >
> > gamg only aggregates across processor subdomains within one layer, so
> maybe I could use one layer of overlap in some way?
> >
> > Thanks,
> > Mark
> >
>
>