[petsc-dev] asm / gasm

Mark Adams mfadams at lbl.gov
Mon Jun 27 00:35:01 CDT 2016


Garth: we have something for you to try in the branch barry/fix-gamg-asm-aggs.

Add the options: -pc_gamg_use_agg_asm -mg_levels_sub_pc_type lu
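
For context, a sketch of how those options might sit in a full run line; the
application name (./myapp), the MPI launcher, and the -ksp_* monitoring flags
are illustrative additions on my part, not part of the branch:

  mpiexec -n 4 ./myapp -pc_type gamg \
      -pc_gamg_use_agg_asm \
      -mg_levels_sub_pc_type lu \
      -ksp_monitor -ksp_converged_reason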


> >
> > I added some code to add a block on each processor for any singletons,
> because the MIS code strips these out (so, yes, not a true MIS).  I should
> do this for users that put a live equation in a singleton, like a
> non-homogeneous Dirichlet BC.  I can add that to your branch.  Let me know
> if that is OK.
>
>    I do not understand this. Do you mean a variable that is only coupled
> to itself? So the row and column for the variable have only an entry on the
> diagonal?


Yes, a BC that has not been removed from the matrix.
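
For concreteness, a minimal sketch of one common way such a row arises; it
assumes the application imposes the Dirichlet value in place with
MatZeroRowsColumns, which leaves only the diagonal entry in that row and
column (the Mat/Vec objects and the row index are assumed to exist already):

  #include <petscmat.h>

  /* Sketch: impose a Dirichlet condition "in place" on row/column bcrow.
     x is expected to already hold the boundary value at bcrow, so the
     right-hand side b is adjusted for it.  After this call the row and the
     column contain only the unit diagonal entry -- exactly the kind of
     singleton discussed above. */
  PetscErrorCode ApplyDirichletInPlace(Mat A, Vec x, Vec b, PetscInt bcrow)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MatZeroRowsColumns(A, 1, &bcrow, 1.0, x, b);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }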


> Or do only the rows have an entry on the diagonal? What do you mean by
> "strips these out"? Do you mean it appears in NO aggregate with MIS?
>
>
Yes, because I don't want these as coarse grid variables. They do not need
a coarse grid correction. An app can have millions of these and I don't
want them around (e.g., they are low rank when you have more null space
vectors than the block size).




>   Rather than "adding a block on each processor for singletons", won't it
> be better if MIS doesn't "strip these out" but instead puts them each in
> their own little (size 1) aggregate? Then they will automatically get their
> own blocks?
>

I would then have to strip them out again for the prolongator, and having a
full-blown ASM block for every BC vertex would be a mess.  I just prefer to
strip them out.


> >
> >
> >    In addition, Fande is adding error checking to PCGASM so that if you
> pass it badly formatted subdomain information (like what was passed from
> GAMG) it will generate a very useful error message instead of just chugging
> along with gibberish.
> >
> >    Barry
> >
> >
> > Mark, my confusion came from the fact that a single MPI process owns
> each of the aggs; that is, the list of degrees of freedom for each agg is
> all on one process.
> >
> > NO, NO, NO
> >
> > This is exactly what PCASM needs but NOT what PCGASM needs.
> >
> >
> > My aggregates span processor subdomains.
> >
> > The MIS aggregator is simple (greedy), so an aggregate assigned to a
> process can span only one layer of vertices into a neighbor. (The HEM
> coarsener is more sophisticated and can deal with Jed's canonical thin
> wire, for instance, and can span forever, sort of.)
> >
> > So the code now is giving you aggregates that span processors (i.e., not
> purely local indices).  I am puzzled that this works.  Am I misunderstanding
> you?  You are very clear here.  Puzzled.
>
>   I mean that ALL the indices for any single aggregate are ALL stored on
> the same process, in the same aggregate list! I don't mean that the indices
> for an aggregate can only be for variables that live on that process.  I
> think we are in agreement here, just bad communication.
>
>
OK, good.

I will add the fix for BCs and add this to ksp/ex56 and test it.
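
For reference, a test run along those lines might look like the following;
the directory, the make target, and ex56's -ne mesh-size option are written
from memory and may need adjusting for your PETSc tree:

  cd src/ksp/ksp/examples/tutorials && make ex56
  mpiexec -n 4 ./ex56 -ne 15 -pc_type gamg \
      -pc_gamg_use_agg_asm -mg_levels_sub_pc_type lu \
      -ksp_monitor_short -ksp_converged_reason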

Thanks,
Mark



>
>
> >
> > I can change this if ASM cannot deal with it: I will just drop
> off-processor indices and add non-covered indices to my new singleton
> aggregate.
> >
> > Mark
>
>