[petsc-users] PCMG behaviour on various levels
Jed Brown
jed at jedbrown.org
Wed Oct 8 09:52:32 CDT 2014
Filippo Leonardi <filippo.leonardi at sam.math.ethz.ch> writes:
> Hi,
>
> Quick PCMG question.
>
> Default behaviour of PCMG (I'm using 3D DMDAs) is to use all processes for
> every level and redundantly solve the coarse level locally (to my knowledge).
>
> However, if I want to keep the number of levels at log_c N (with c = 2^d,
> d = dimension, N = dof of the finest mesh) so that the coarsest level always
> has the same size (say N_0), I run into scaling problems. At some point, as I
> increase N together with the number of processors (I keep N ~ #proc), too many
> processes end up solving the coarsest levels (to the point where each process
> has less than 1 dof).
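[For concreteness, the setup being described can be reproduced with PETSc's 3D Poisson tutorial src/ksp/ksp/examples/tutorials/ex45.c; the process count, refinement level, and level count below are made up for illustration and are not from the original exchange:

  mpiexec -n 512 ./ex45 -da_refine 4 -pc_type mg -pc_mg_levels 5 \
      -mg_coarse_pc_type redundant -ksp_monitor

Every level, including the coarsest, lives on all 512 processes, and the redundant coarse solve is the default behaviour the question refers to; with a coarsest grid of only a few hundred dof, most processes already own less than one coarse dof.]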
This is indeed a problem for large process counts, and DMDA does not
currently have a way to coarsen onto subcommunicators. I typically deal
with this by switching to algebraic multigrid on the coarsest levels,
e.g., -mg_coarse_pc_type gamg. GAMG manages aggregation to smaller
process sets automatically.
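[Spelled out as options, again with made-up counts, the suggestion only changes the coarse solver relative to the run sketched above:

  mpiexec -n 4096 ./ex45 -da_refine 7 -pc_type mg -pc_mg_levels 5 \
      -mg_coarse_pc_type gamg -ksp_monitor

The geometric hierarchy now stops while the coarsest geometric level is still comfortably distributed, and GAMG coarsens that problem further algebraically, reducing the active process set on its own as the levels shrink.]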
In more specific circumstances, such as HPGMG-FE, I stick with geometric
multigrid and restrict the communicator on the coarse grids. The
general approach could be ported to DMDA, though it is not trivial and
would break some existing code (that assumes the restriction is
distributed across the same process set).
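[The basic mechanics of restricting the communicator look roughly like the following bare MPI sketch; this is not the actual HPGMG-FE code, and the size of the retained subset is arbitrary:

  #include <mpi.h>

  int main(int argc, char **argv)
  {
    MPI_Comm coarse_comm;
    int      rank, size, nsub, color;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    nsub  = (size / 8 > 0) ? size / 8 : 1;      /* arbitrary: keep 1/8 of the ranks */
    color = (rank < nsub) ? 0 : MPI_UNDEFINED;  /* the rest get MPI_COMM_NULL below */
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &coarse_comm);

    if (coarse_comm != MPI_COMM_NULL) {
      /* redistribute the coarse-grid problem onto coarse_comm and solve it here */
      MPI_Comm_free(&coarse_comm);
    }
    /* ranks outside the subset skip the coarse solve and rejoin at the next
       collective on the full communicator */

    MPI_Finalize();
    return 0;
  }

The hard part, as noted above, is redistributing the coarse-grid data onto coarse_comm and back, which is what the existing DMDA-based code would have to be taught about.]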