[petsc-dev] asm / gasm

Barry Smith bsmith at mcs.anl.gov
Fri Jun 24 20:48:10 CDT 2016


  Fande,

   Based on my bisection I got these results:

 
There are only 'skip'ped commits left to test.
The first bad commit could be any of:
08902ef530872fd742dc4ae06e5f55514aa5a6f4
c14adfe1f4858fc4a331628e0999094c88508aa6
25623ae837111235baaf8310854f127ee4f5d849
f771a2744df1b1e04cbde7267f90af2b8dd270ea
bac5b06fd7acf2dbdffb3ef782ccfe7a95cc790d
da8d96df7d39c0e17cff0758e470753f324e2e80
4bb724134134efafd385bc6b729da2f75e5ded03
930d09c13c6be24c34efcb20ee858969724c0dd9
b9d0fdaa8b8fd04b16ee28e86ef9a83a5d6857d0
We cannot bisect more!
~/Src/petsc ((6a644ab...)|BISECTING) arch-basic

Before these commits 

  src/ksp/ksp/examples/tutorials/ex10 ran fine with 

petscmpiexec -valgrind -n 2 ./ex10 -f0 ~/Datafiles/matrices/poisson2 -ksp_type cg -ksp_monitor_short -ksp_rtol 1.e-8 -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -pc_gamg_coarse_eq_limit 100 -pc_gamg_reuse_interpolation true -pc_gamg_square_graph 1 -pc_gamg_threshold 0.0 -ksp_converged_reason -use_mat_nearnullspace true -mg_levels_ksp_max_it 2 -mg_levels_ksp_type chebyshev -mg_levels_esteig_ksp_type cg -mg_levels_esteig_ksp_max_it 10 -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 -mg_levels_pc_type sor -mat_block_size 1 -pc_gamg_use_agg_gasm -mg_levels_pc_type gasm -ksp_view

After these commits it ran with a valgrind error:

$ petscmpiexec -valgrind -n 2 ./ex10 -f0 ~/Datafiles/matrices/poisson2 -ksp_type cg -ksp_monitor_short -ksp_rtol 1.e-8 -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -pc_gamg_coarse_eq_limit 100 -pc_gamg_reuse_interpolation true -pc_gamg_square_graph 1 -pc_gamg_threshold 0.0 -ksp_converged_reason -use_mat_nearnullspace true -mg_levels_ksp_max_it 2 -mg_levels_ksp_type chebyshev -mg_levels_esteig_ksp_type cg -mg_levels_esteig_ksp_max_it 10 -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 -mg_levels_pc_type sor -mat_block_size 1 -pc_gamg_use_agg_gasm -mg_levels_pc_type gasm -ksp_view

==10897== Invalid write of size 8
==10897==    at 0x100E79764: PCSetUp_GASM (gasm.c:432)
==10897==    by 0x100F35D3E: PCSetUp (precon.c:984)
==10897==    by 0x101024208: KSPSetUp (itfunc.c:332)
==10897==    by 0x100F14059: PCSetUp_MG (mg.c:773)
==10897==    by 0x100E9C2C1: PCSetUp_GAMG (gamg.c:734)
==10897==    by 0x100F35D3E: PCSetUp (precon.c:984)
==10897==    by 0x101024208: KSPSetUp (itfunc.c:332)
==10897==    by 0x100005C48: main (in ./ex10)
==10897==  Address 0x103d8baa8 is 0 bytes after a block of size 3,880 alloc'd
==10897==    at 0x100019EBB: malloc (in /usr/local/Cellar/valgrind/3.11.0/lib/valgrind/vgpreload_memcheck-amd64-darwin.so)
==10897==    by 0x1000D925C: PetscMallocAlign (mal.c:34)
==10897==    by 0x1002BA3D2: VecCreate_MPI_Private (pbvec.c:491)
==10897==    by 0x1002BAEAC: VecCreate_MPI (pbvec.c:535)
==10897==    by 0x1002ECFBE: VecSetType (vecreg.c:53)
==10897==    by 0x1002BB2B0: VecCreate_Standard (pbvec.c:562)
==10897==    by 0x1002ECFBE: VecSetType (vecreg.c:53)
==10897==    by 0x1003F1A2F: MatCreateVecs (matrix.c:8653)
==10897==    by 0x100E78DE9: PCSetUp_GASM (gasm.c:396)
==10897==    by 0x100F35D3E: PCSetUp (precon.c:984)
==10897==    by 0x101024208: KSPSetUp (itfunc.c:332)
==10897==    by 0x100F14059: PCSetUp_MG (mg.c:773)
==10897==    by 0x100E9C2C1: PCSetUp_GAMG (gamg.c:734)
==10897==    by 0x100F35D3E: PCSetUp (precon.c:984)
==10897==    by 0x101024208: KSPSetUp (itfunc.c:332)
==10897==    by 0x100005C48: main (in ./ex10)
==10897== 
[1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[1]PETSC ERROR: Nonconforming object sizes
[1]PETSC ERROR: Vector wrong size 1078034432 for scatter 950 (scatter forward and vector to != ctx to size)
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[1]PETSC ERROR: Petsc Development GIT revision: v3.6.1-1135-ge12b472  GIT Date: 2015-09-26 21:43:36 -0600
[1]PETSC ERROR: ./ex10 on a arch-basic named Barrys-MacBook-Pro.local by barrysmith Fri Jun 24 20:44:37 2016
[1]PETSC ERROR: Configure options --with-fc=0 --download-sowing=0 --with-mpi-dir=/Users/barrysmith/libraries
[1]PETSC ERROR: #1 VecScatterBegin() line 1689 in /Users/barrysmith/Src/petsc/src/vec/vec/utils/vscat.c
[1]PETSC ERROR: #2 PCSetUp_GASM() line 437 in /Users/barrysmith/Src/petsc/src/ksp/pc/impls/gasm/gasm.c
[1]PETSC ERROR: #3 PCSetUp() line 984 in /Users/barrysmith/Src/petsc/src/ksp/pc/interface/precon.c
[1]PETSC ERROR: #4 KSPSetUp() line 332 in /Users/barrysmith/Src/petsc/src/ksp/ksp/interface/itfunc.c
[1]PETSC ERROR: #5 PCSetUp_MG() line 773 in /Users/barrysmith/Src/petsc/src/ksp/pc/impls/mg/mg.c
[1]PETSC ERROR: #6 PCSetUp_GAMG() line 734 in /Users/barrysmith/Src/petsc/src/ksp/pc/impls/gamg/gamg.c
[1]PETSC ERROR: #7 PCSetUp() line 984 in /Users/barrysmith/Src/petsc/src/ksp/pc/interface/precon.c
[1]PETSC ERROR: #8 KSPSetUp() line 332 in /Users/barrysmith/Src/petsc/src/ksp/ksp/interface/itfunc.c
[1]PETSC ERROR: #9 main() line 312 in /Users/barrysmith/Src/petsc/src/ksp/ksp/examples/tutorials/ex10.c
[1]PETSC ERROR: PETSc Option Table entries:
[1]PETSC ERROR: -f0 /Users/barrysmith/Datafiles/matrices/poisson2
[1]PETSC ERROR: -ksp_converged_reason
[1]PETSC ERROR: -ksp_monitor_short
[1]PETSC ERROR: -ksp_rtol 1.e-8
[1]PETSC ERROR: -ksp_type cg
[1]PETSC ERROR: -ksp_view
[1]PETSC ERROR: -malloc_test
[1]PETSC ERROR: -mat_block_size 1
[1]PETSC ERROR: -mg_levels_esteig_ksp_max_it 10
[1]PETSC ERROR: -mg_levels_esteig_ksp_type cg
[1]PETSC ERROR: -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05
[1]PETSC ERROR: -mg_levels_ksp_max_it 2
[1]PETSC ERROR: -mg_levels_ksp_type chebyshev
[1]PETSC ERROR: -mg_levels_pc_type gasm
[1]PETSC ERROR: -pc_gamg_agg_nsmooths 1
[1]PETSC ERROR: -pc_gamg_coarse_eq_limit 100
[1]PETSC ERROR: -pc_gamg_reuse_interpolation true
[1]PETSC ERROR: -pc_gamg_square_graph 1
[1]PETSC ERROR: -pc_gamg_threshold 0.0
[1]PETSC ERROR: -pc_gamg_type agg
[1]PETSC ERROR: -pc_gamg_use_agg_gasm
[1]PETSC ERROR: -pc_type gamg
[1]PETSC ERROR: -use_mat_nearnullspace true
[1]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov----------
application called MPI_Abort(MPI_COMM_WORLD, 60) - process 1
~/Src/petsc/src/ksp/ksp/examples/tutorials ((e12b472...)) arch-basic

in commit e12b4729b15ad5de2ed17324d40445da9b32e630

So you either did not make the vector you are accessing large enough, or you are accessing its values incorrectly. Could you please see if you can determine the bug so we can get GASM back on track? I urge you to debug at commit e12b4729b15ad5de2ed17324d40445da9b32e630 or before to find your error, not in master, since master has so many additional changes. Once you have found the bug we can work together to port the fix up to maint or master.
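
   The valgrind trace says the write at gasm.c:432 lands just past the end of a block allocated through MatCreateVecs() at gasm.c:396, i.e. the code is writing more entries into that work vector than it holds. A sanity check along these lines (only a sketch with made-up names, not the actual gasm.c code) would catch that kind of overrun before any scatter runs:

/* Sketch only, names made up: verify that a work vector obtained from
   MatCreateVecs() is large enough for the indices about to be written into it. */
#include <petscis.h>
#include <petscvec.h>

static PetscErrorCode CheckWorkVecSize(IS is_local, Vec work)
{
  PetscErrorCode ierr;
  PetscInt       nidx, nloc;

  PetscFunctionBegin;
  ierr = ISGetLocalSize(is_local, &nidx);CHKERRQ(ierr);   /* entries we intend to write */
  ierr = VecGetLocalSize(work, &nloc);CHKERRQ(ierr);      /* room actually allocated */
  if (nidx > nloc) SETERRQ2(PETSC_COMM_SELF, PETSC_ERR_ARG_SIZ,
                            "Index set size %D exceeds work vector local size %D", nidx, nloc);
  PetscFunctionReturn(0);
}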

   Thanks

   Barry





> On Jun 24, 2016, at 9:35 AM, Fande Kong <fdkong.jd at gmail.com> wrote:
> 
> 
> Message: 5
> Date: Fri, 24 Jun 2016 08:59:57 -0500
> From: Barry Smith <bsmith at mcs.anl.gov>
> To: Mark Adams <mfadams at lbl.gov>
> Cc: For users of the development version of PETSc
>         <petsc-dev at mcs.anl.gov>
> Subject: Re: [petsc-dev] asm / gasm
> 
> 
> > On Jun 24, 2016, at 1:35 AM, Mark Adams <mfadams at lbl.gov> wrote:
> >
> >
> >
> > On Thu, Jun 23, 2016 at 11:46 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> >    Mark,
> >
> >     It is not as simple as this to convert to ASM. It will take a little bit of work to use ASM here instead of GASM.
> >
> >
> > Just to be clear: ASM used to work.  Did the semantics of ASM change?
> 
> Hi Mark,
> 
> I assume you mean that GASM used to work, and now it is not working any more.
> 
> GASM was originally written by Dmitry. The basic idea is to allow multi-rank blocks, that is, a multi-rank subdomain problem can be solved in parallel using a small number of processor cores. This is different from ASM.
> 
> I was involved in the development of GASM last summer. There were some changes:
> 
> (1) Added a function to increase the overlap of the multi-rank subdomains. The function is called by GASM by default.
> 
> (2) Added a hierarchical partitioning to optimize data exchange; it ensures that the small subdomains within a multi-rank subdomain are geometrically connected. GASM does not use this functionality by default.
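
[Aside: the multi-rank subdomain interface described above is exercised through PCGASMSetSubdomains() and PCGASMSetOverlap(). A minimal sketch of setting it up, with made-up names and not taken from the failing example, as I read the interface:]

/* Sketch only: hand GASM explicit inner (non-overlapping, possibly multi-rank)
   subdomains and let it build the outer subdomains from the overlap setting;
   passing NULL for the outer subdomains appears to be allowed. */
#include <petscksp.h>

PetscErrorCode SetupGASM(PC pc, PetscInt nsub, IS iis[])
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PCSetType(pc, PCGASM);CHKERRQ(ierr);
  ierr = PCGASMSetSubdomains(pc, nsub, iis, NULL);CHKERRQ(ierr); /* inner subdomains only */
  ierr = PCGASMSetOverlap(pc, 1);CHKERRQ(ierr);                  /* extend by one matrix level */
  PetscFunctionReturn(0);
}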
> 
> Anyway, if you have an example (like Barry asked for) showing the broken GASM, I will debug it (of course, if Barry does not mind).
> 
> Fande Kong,
>  
> 
>    Show me a commit where ASM worked!  Do you mean that GASM worked? The code has GASM calls in it, not ASM, so how could ASM have previously worked? It is possible that something changed in GASM that broke GAMG's usage of GASM. Once you tell me how to reproduce the problem with GASM I can try to track down the problem.
> 
> >
> >     But before that please please tell me the command line argument and example you use where the GASM crashes so I can get that fixed. Then I will look at using ASM instead after I have the current GASM code running again.
> >
> > In branch mark/gamg-agg-asm in ksp ex56, 'make runex56':
> 
>    I don't care about this! This is where you have tried to change from GASM to ASM, which I told you is non-trivial.  Give me the example and command line where the GASM version in master (or maint) doesn't work and the error message includes ** Max-trans not allowed because matrix is distributed
> 
>    We are not communicating very well: you jumped from stating GASM crashed to monkeying with ASM, and now refuse to tell me how to reproduce the GASM crash. We have to start by fixing the current code to work with GASM (if it ever worked) and then move on to using ASM (which is just an optimization of the GASM usage).
> 
> 
> Barry
> 
> 
> >
> > 14:12 nid00495  ~/petsc/src/ksp/ksp/examples/tutorials$ make runex56
> > [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> > [0]PETSC ERROR: Petsc has generated inconsistent data
> > [0]PETSC ERROR: MPI_Allreduce() called in different locations (code lines) on different processors
> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> > [0]PETSC ERROR: Petsc Development GIT revision: v3.7.2-633-g4f88208  GIT Date: 2016-06-23 18:53:31 +0200
> > [0]PETSC ERROR: /global/u2/m/madams/petsc/src/ksp/ksp/examples/tutorials/./ex56 on a arch-xc30-dbg64-intel named nid00495 by madams Thu Jun 23 14:12:57 2016
> > [0]PETSC ERROR: Configure options --COPTFLAGS="-no-ipo -g -O0" --CXXOPTFLAGS="-no-ipo -g -O0" --FOPTFLAGS="-fast -no-ipo -g -O0" --download-parmetis --download-metis --with-ssl=0 --with-cc=cc --with-clib-autodetect=0 --with-cxx=CC --with-cxxlib-autodetect=0 --with-debugging=1 --with-fc=0 --with-shared-libraries=0 --with-x=0 --with-mpiexec=srun LIBS=-lstdc++ --with-64-bit-indices PETSC_ARCH=arch-xc30-dbg64-intel
> > [0]PETSC ERROR: #1 MatGetSubMatrices_MPIAIJ() li
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >    Barry
> >
> >
> > > On Jun 23, 2016, at 4:19 PM, Mark Adams <mfadams at lbl.gov> wrote:
> > >
> > > The question boils down to, for empty processors do we:
> > >
> > > ierr = ISCreateGeneral(PETSC_COMM_SELF, 0, NULL, PETSC_COPY_VALUES, &is);CHKERRQ(ierr);
> > > ierr = PCASMSetLocalSubdomains(subpc, 1, &is, NULL);CHKERRQ(ierr);
> > > ierr = ISDestroy(&is);CHKERRQ(ierr);
> > >
> > > or
> > >
> > > PCASMSetLocalSubdomains(subpc, 0, NULL, NULL);
> > >
> > > The latter gives an error that one domain is needed, and the former gives the error appended below.
> > >
> > > I've checked in the code for this second error in ksp (make runex56)
> > >
> > > Thanks,
> > >
> > > [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> > > [0]PETSC ERROR: Petsc has generated inconsistent data
> > > [0]PETSC ERROR: MPI_Allreduce() called in different locations (code lines) on different processors
> > > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> > > [0]PETSC ERROR: Petsc Development GIT revision: v3.7.2-633-g4f88208  GIT Date: 2016-06-23 18:53:31 +0200
> > > [0]PETSC ERROR: /global/u2/m/madams/petsc/src/ksp/ksp/examples/tutorials/./ex56 on a arch-xc30-dbg64-intel named nid00495 by madams Thu Jun 23 14:12:57 2016
> > > [0]PETSC ERROR: Configure options --COPTFLAGS="-no-ipo -g -O0" --CXXOPTFLAGS="-no-ipo -g -O0" --FOPTFLAGS="-fast -no-ipo -g -O0" --download-parmetis --download-metis --with-ssl=0 --with-cc=cc --with-clib-autodetect=0 --with-cxx=CC --with-cxxlib-autodetect=0 --with-debugging=1 --with-fc=0 --with-shared-libraries=0 --with-x=0 --with-mpiexec=srun LIBS=-lstdc++ --with-64-bit-indices PETSC_ARCH=arch-xc30-dbg64-intel
> > > [0]PETSC ERROR: #1 MatGetSubMatrices_MPIAIJ() line 1147 in /global/u2/m/madams/petsc/src/mat/impls/aij/mpi/mpiov.c
> > > [0]PETSC ERROR: #2 MatGetSubMatrices_MPIAIJ() line 1147 in /global/u2/m/madams/petsc/src/mat/impls/aij/mpi/mpiov.c
> > > [1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> > >
> > > On Thu, Jun 23, 2016 at 8:05 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > >
> > >   Where is the command line that generates the error?
> > >
> > >
> > > > On Jun 23, 2016, at 12:08 AM, Mark Adams <mfadams at lbl.gov> wrote:
> > > >
> > > > [adding Garth]
> > > >
> > > > On Thu, Jun 23, 2016 at 12:52 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > > >
> > > >   Mark,
> > > >
> > > >    I think there is a misunderstanding here. With GASM an individual block problem is __solved__ (via a parallel KSP) in parallel by several processes; with ASM each block is "owned" by and solved on a single process.
> > > >
> > > > Ah, OK, so this is for multiple processors in a block. Yes, we are looking at small smoother blocks.
> > > >
> > > >
> > > >    With both the "block" can come from any unknowns on any processes. You can have, for example a block that comes from a region snaking across several processes if you like (or it makes sense due to coupling in the matrix).
> > > >
> > > >    By default if you use ASM it will create one non-overlapping block defined by all unknowns owned by a single process and then extend it by "one level" (defined by the nonzero structure of the matrix) to get overlap.
> > > >
> > > > The default in ASM is one level of overlap? That is new.  (OK, I have not looked at ASM in like over 10 years)
> > > >
> > > > If you use multiple blocks per process it defines the non-overlapping blocks within a single process's unknowns
> > > >
> > > > I assume this still chops the matrix and does not call a partitioner.
> > > >
> > > > and extends each of them to have overlap (again by the non-zero structure of the matrix). The default is simple because the user only needs to indicate the number of blocks per process; the drawback is of course that it depends on the process layout, number of processes, etc., and does not take into account particular "coupling information" that the user may know about their problem.
> > > >
> > > >   If the user wishes to define the blocks themselves, that is also possible with PCASMSetLocalSubdomains(). Each process provides 1 or more index sets for the subdomains it will solve on. Note that the index sets can contain any unknowns in the entire problem, so the blocks do not have to "line up" with the parallel decomposition at all.
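
[Aside: a minimal sketch of that usage follows; the block sizes and index values are made up for illustration and are not from GAMG.]

/* Sketch only: each process hands ASM its own list of index sets, one per block.
   The indices may refer to any unknowns in the global problem; the values below
   are invented purely for illustration. */
#include <petscksp.h>

PetscErrorCode SetupASMBlocks(PC pc)
{
  PetscErrorCode ierr;
  IS             is[2];
  PetscInt       blk0[] = {0, 1, 2, 3}, blk1[] = {4, 5, 6, 7};

  PetscFunctionBegin;
  ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_SELF, 4, blk0, PETSC_COPY_VALUES, &is[0]);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_SELF, 4, blk1, PETSC_COPY_VALUES, &is[1]);CHKERRQ(ierr);
  ierr = PCASMSetLocalSubdomains(pc, 2, is, NULL);CHKERRQ(ierr); /* two blocks on this process */
  ierr = ISDestroy(&is[0]);CHKERRQ(ierr);
  ierr = ISDestroy(&is[1]);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}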
> > > >
> > > > Oh, OK, this is what I want. (I thought this worked).
> > > >
> > > > Of course, how to determine and provide good subdomains may not always be clear.
> > > >
> > > > In smoothed aggregation there is an argument that the aggregates are good, but the scale is obviously fixed.  On a regular grid smoothed aggregation wants 3^D sized aggregates, which is obviously wonderful for ASM.  And for anisotropy you want your ASM blocks to be on strongly connected components, which is what smoothed aggregation wants (not that I do this very well).
> > > >
> > > >
> > > >   I see in GAMG you have PCGAMGSetUseASMAggs
> > > >
> > > > But the code calls PCGASMSetSubdomains and the command line is -pc_gamg_use_agg_gasm, so this is all messed up.  (more below)
> > > >
> > > > which sadly does not have an explanation in the users manual and sadly does not have a matching options database name: -pc_gamg_use_agg_gasm does not follow the rule of dropping the word Set, using all lower case, and putting _ between words, under which the option should be -pc_gamg_use_asm_aggs.
> > > >
> > > > BUT, THIS IS THE WAY IT WAS!  It looks like someone hijacked this code and made it gasm.  I never did this.
> > > >
> > > > Barry: you did this apparently in 2013.
> > > >
> > > >
> > > >    In addition to this one you could also have one that uses the aggs but uses PCASM to manage the solves instead of GASM; it would likely be less buggy and more efficient.
> > > >
> > > > yes
> > > >
> > > >
> > > >   Please tell me exactly what example you tried to run with what options and I will debug it.
> > > >
> > > > We got an error message:
> > > >
> > > > ** Max-trans not allowed because matrix is distributed
> > > >
> > > > Garth: is this from your code perhaps? I don't see it in PETSc.
> > > >
> > > > Note that ALL functionality that is included in PETSc should have tests that exercise that functionality; then we will find out immediately when it is broken instead of two years later when it is much harder to debug. If this -pc_gamg_use_agg_gasm had had a test we would not be in this mess now. (Jed's damn code reviews sure don't pick up this stuff).
> > > >
> > > > First we need to change gasm to asm.
> > > >
> > > > We could add this argument -pc_gamg_use_agg_asm to ksp/ex56 (runex56, or make a new test).  The SNES version (also ex56) is my current test that I like to refer to for recommended parameters for elasticity, so I'd like to keep that clean, but we can add junk to ksp/ex56.
> > > >
> > > > I've done this in a branch mark/gamg-agg-asm.  I get an error (appended). It looks like the second coarsest grid, which has 36 dof on one processor, has an index 36 in the block on every processor. Strange.  I can take a look at it later.
> > > >
> > > > Mark
> > > >
> > > > > [3]PETSC ERROR: [4]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> > > > > [4]PETSC ERROR: Petsc has generated inconsistent data
> > > > > [4]PETSC ERROR: ith 0 block entry 36 not owned by any process, upper bound 36
> > > > > [4]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> > > > > [4]PETSC ERROR: Petsc Development GIT revision: v3.7.2-630-g96e0c40  GIT Date: 2016-06-22 10:03:02 -0500
> > > > > [4]PETSC ERROR: ./ex56 on a arch-macosx-gnu-g named MarksMac-3.local by markadams Thu Jun 23 06:53:27 2016
> > > > > [4]PETSC ERROR: Configure options COPTFLAGS="-g -O0" CXXOPTFLAGS="-g -O0" FOPTFLAGS="-g -O0" --download-hypre=1 --download-parmetis=1 --download-metis=1 --download-ml=1 --download-p4est=1 --download-exodus=1 --download-triangle=1 --with-hdf5-dir=/Users/markadams/Codes/hdf5 --with-x=0 --with-debugging=1 PETSC_ARCH=arch-macosx-gnu-g --download-chaco
> > > > > [4]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2348 in /Users/markadams/Codes/petsc/src/vec/vec/utils/vpscat.c
> > > > > [4]PETSC ERROR: #2 VecScatterCreate() line 1552 in /Users/markadams/Codes/petsc/src/vec/vec/utils/vscat.c
> > > > > [4]PETSC ERROR: Petsc has generated inconsistent data
> > > > > [3]PETSC ERROR: ith 0 block entry 36 not owned by any process, upper bound 36
> > > > > [3]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> > > > > [3]PETSC ERROR: Petsc Development GIT revision: v3.7.2-630-g96e0c40  GIT Date: 2016-06-22 10:03:02 -0500
> > > > > [3]PETSC ERROR: ./ex56 on a arch-macosx-gnu-g named MarksMac-3.local by markadams Thu Jun 23 06:53:27 2016
> > > > > [3]PETSC ERROR: Configure options COPTFLAGS="-g -O0" CXXOPTFLAGS="-g -O0" FOPTFLAGS="-g -O0" --download-hypre=1 --download-parmetis=1 --download-metis=1 --download-ml=1 --download-p4est=1 --download-exodus=1 --download-triangle=1 --with-hdf5-dir=/Users/markadams/Codes/hdf5 --with-x=0 --with-debugging=1 PETSC_ARCH=arch-macosx-gnu-g --download-chaco
> > > > > [3]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2348 in /Users/markadams/Codes/petsc/src/vec/vec/utils/vpscat.c
> > > > > [3]PETSC ERROR: #2 VecScatterCreate() line 1552 in /Users/markadams/Codes/petsc/src/vec/vec/utils/vscat.c
> > > > > [3]PETSC ERROR: #3 PCSetUp_ASM() line 279 in /Users/markadams/Codes/petsc/src/ksp/pc/impls/asm/asm.c
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >    Barry
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > > On Jun 22, 2016, at 5:20 PM, Mark Adams <mfadams at lbl.gov> wrote:
> > > > >
> > > > >
> > > > >
> > > > > On Wed, Jun 22, 2016 at 8:06 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > > > >
> > > > >    I suggest focusing on asm.
> > > > >
> > > > > OK, I will switch gasm to asm; this does not work anyway.
> > > > >
> > > > > Having blocks that span multiple processes seems like overkill for a smoother?
> > > > >
> > > > > No, because it is a pain to have the math convolved with the parallel decomposition strategy (i.e., I can't tell an application how to partition their problem). If an aggregate spans processor boundaries, which is fine and needed, and let's say we have a pretty uniform problem, then if the block gets split up, H is small in part of the domain and convergence could suffer along processor boundaries.  And having the math change as the parallel decomposition changes is annoying.
> > > > >
> > > > > (Major league overkill.) In fact, doesn't one want multiple blocks per process, i.e., pretty small blocks?
> > > > >
> > > > > No, it is just doing what would be done in serial.  If the cost of moving the data across processors is a problem then that is a tradeoff to consider.
> > > > >
> > > > > And I think you are misunderstanding me.  There are lots of blocks per process (the aggregates are, say, 3^D in size).  And many of the aggregates/blocks along the processor boundary will be split between processors, resulting in small blocks and a weak ASM PC on processor boundaries.
> > > > >
> > > > > I can understand ASM not being general and not letting blocks span processor boundaries, but I don't think the extra matrix communication costs are a big deal (done just once) and the vector communication costs are not bad; it probably does not add (too many) new processors to communicate with.
> > > > >
> > > > >
> > > > >    Barry
> > > > >
> > > > > > On Jun 22, 2016, at 7:51 AM, Mark Adams <mfadams at lbl.gov> wrote:
> > > > > >
> > > > > > I'm trying to get block smoothers to work for gamg.  We (Garth) tried this and got this error:
> > > > > >
> > > > > >
> > > > > >  - Another option is to use '-pc_gamg_use_agg_gasm true' and '-mg_levels_pc_type gasm'.
> > > > > >
> > > > > >
> > > > > > Running in parallel, I get
> > > > > >
> > > > > >      ** Max-trans not allowed because matrix is distributed
> > > > > >  ----
> > > > > >
> > > > > > First, what is the difference between asm and gasm?
> > > > > >
> > > > > > Second, I need to fix this to get block smoothers. This used to work.  Did we lose the capability to have blocks that span processor subdomains?
> > > > > >
> > > > > > gamg only aggregates across processor subdomains within one layer, so maybe I could use one layer of overlap in some way?
> > > > > >
> > > > > > Thanks,
> > > > > > Mark
> > > > > >
> > > > >
> > > > >
> > > >
> > > >
> > >
> > >
> >
> >
> 
> 
> 



