Putting a peculiar multilevel scheme into DMMG?
Jed Brown
jed at 59A2.org
Sun Dec 13 18:41:50 CST 2009
On Sun, 13 Dec 2009 18:06:04 -0600, Matthew Knepley <knepley at gmail.com> wrote:
> I will explain the history since I am to blame for this one. I needed
> coarsenHierarchy because repeated coarsening can cause degradation of
> the meshes. Thus all coarse meshes must be made at once. I added the
> hierarchy option, but only the coarsen branch was used.
I suspected something like that. So one real difference is that DMRefine
can (in principle) move the refined DM onto a different communicator,
whereas the hierarchy variants cannot. Is this flexibility actually used
anywhere? It seems like it could be quite powerful for additive
multigrid, but I'm not aware of it working now. If it is used,
DMRefineHierarchy and DMCoarsenHierarchy should also take an array of
communicators (passing NULL would mean to use the same communicator).
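To sketch the shape such an interface might take (this is a proposal,
not existing code; the comms parameter is the addition):

PetscErrorCode DMRefineHierarchy(DM dm,PetscInt nlevels,MPI_Comm comms[],DM **dmf);
PetscErrorCode DMCoarsenHierarchy(DM dm,PetscInt nlevels,MPI_Comm comms[],DM **dmc);
/* comms[i] is the communicator for level i; comms == NULL keeps every
   level on dm's communicator */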
> > DMCoarsenHierarchy is implemented for Mesh, but the array is currently
> > leaking (DMMGSetDM forgets a PetscFree). Is it indeed preferred that
> > the caller does not allocate the array (despite knowing how much is
> > needed) but is responsible for freeing it? (I ask because this is
> > clumsy for a single level of refinement.) Either way, I'll document
> > the choice and fix the leak.
> >
>
> Cool. I guess we could have caller allocation, but that is harder to
> check for correctness.
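For the record, fixing the leak just means DMMGSetDM calling PetscFree
once the levels hold their DMs. A sketch of the pattern under the
present callee-allocates convention (names here are illustrative):

DM *hierarchy;
DMCoarsenHierarchy(dm,nlevels,&hierarchy); /* callee allocates the array */
/* ... hand hierarchy[0..nlevels-1] off to the levels ... */
PetscFree(hierarchy); /* the free that DMMGSetDM currently forgets */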
The alternative, keeping callee allocation only (unless we want two
ways to do the same thing), seems much worse for a single level:
DM *tmp,dmf;
DMRefineHierarchy(dmc,1,&tmp); /* callee allocates a length-1 array */
dmf = *tmp;                    /* keep the one refined DM */
PetscFree(tmp);                /* caller must still free the array */
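With caller allocation the single-level case would collapse to
something like this (hypothetical signature, not what the code does
today):

DM dmf;
DMRefineHierarchy(dmc,1,&dmf); /* caller provides storage for one DM */

No temporary array, and no free to forget.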
Jed