[petsc-dev] asm / gasm
Barry Smith
bsmith at mcs.anl.gov
Sun Jun 26 21:50:34 CDT 2016
> On Jun 26, 2016, at 9:28 PM, Mark Adams <mfadams at lbl.gov> wrote:
>
> Thanks Barry, ... we are still not communicating (see below).
>
> On Sun, Jun 26, 2016 at 11:56 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> I have changed the GAMG aggs support to use PCASM and not PCGASM. I don't see how it ever worked with PCGASM; very strange. Maybe it was written and tested with PCASM and then later someone changed to PCGASM and did not test it again.
>
> I really thought it was ASM but memory can be strange sometimes ...
>
>
> I was wrong when I said it would be difficult to change to PCASM; it was much simpler than I thought. This is because I assumed the code had been done correctly for PCGASM, while in fact, though it called PCGASM, it followed the model of what PCASM needed :-(.
>
> Anyways, I have a branch barry/fix-gamg-asm-aggs that now works for my ex10 tests. I will continue to clean up the branch, fix naming styles, and test with more difficult GAMG examples, and add proper nightly tests for this functionality, which it never had before.
>
> Great thanks.
>
> Which ex10 is it? And how do I run your tests?
src/ksp/ksp/examples/tutorials
petscmpiexec -valgrind -n 2 ./ex10 -f0 ~/Datafiles/matrices/poisson2 -ksp_type cg -ksp_monitor_short -ksp_rtol 1.e-8 -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -pc_gamg_coarse_eq_limit 100 -pc_gamg_reuse_interpolation true -pc_gamg_square_graph 1 -pc_gamg_threshold 0.0 -ksp_converged_reason -use_mat_nearnullspace true -mg_levels_ksp_max_it 2 -mg_levels_ksp_type chebyshev -mg_levels_esteig_ksp_type cg -mg_levels_esteig_ksp_max_it 10 -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 -mg_levels_pc_type sor -mat_block_size 1 -pc_gamg_use_agg_asm -mg_levels_pc_type asm
The data files come from ftp.mcs.anl.gov in the directory pub/petsc/datafiles/matrices.
>
> I added some code to add a block on each processor for any singletons, because the MIS code strips these out (so, yes, it is not a true MIS). I should do this for users that put a live equation in a singleton, like a non-homogeneous Dirichlet BC. I can add that to your branch. Let me know if that is OK.
I do not understand this. Do you mean a variable that is only coupled to itself? So the row and column for the variable have only an entry on the diagonal? Or only the row has an entry on the diagonal? What do you mean by "strips it out"? Do you mean it appears in NO aggregate with MIS?
Rather than "adding a block on each processor" for singletons, wouldn't it be better if MIS didn't "strip these out" but instead put them each in their own little (size 1) aggregate? Then they would automatically get their own blocks.
>
>
> In addition Fande is adding error checking to PCGASM so if you pass it badly formatted subdomain information (like was passed from GAMG) it will generate a very useful error message instead of just chugging along with gibberish.
>
> Barry
>
>
> Mark, my confusion came from the fact that a single MPI process owns each of the aggs; that is, the list of degrees of freedom for each agg is all on one process.
>
> NO, NO, NO
>
> This is exactly what PCASM needs but NOT what PCGASM needs.
>
>
> My aggregates span processor subdomains.
>
> The MIS aggregator is simple enough (greedy) that an aggregate assigned to a process can span only one layer of vertices into a neighbor. (The HEM coarsener is more sophisticated and can deal with Jed's canonical thin wire, for instance, and can span forever, sort of.)
>
> So the code now is giving you aggregates that span processors (i.e., not local indices). I am puzzled that this works. Am I misunderstanding you? You are very clear here. Puzzled.
I mean that ALL the indices for any single aggregate are stored on the same process, in the same aggregate list! I don't mean that the indices for an aggregate can only be for variables that live on that process. I think we are in agreement here, just communicating badly.
>
> I can change this if ASM cannot deal with it: I will just drop off-processor indices and add non-covered indices to my new singleton aggregate.
>
> Mark