[petsc-dev] [petsc-users] 2 levels of parallelism for ASM

Dmitry Karpeev karpeev at mcs.anl.gov
Mon Jan 24 15:16:07 CST 2011


petsc-dev has PCGASM, which is a "generalization" of PCASM that allows for
subdomains that live on a subcommunicator of the PC's communicator. The API
is nearly identical to ASM's, and GASM will eventually replace ASM once we
are reasonably sure it works correctly (e.g., I'm chasing down a small
memory leak in GASM at the moment).
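
To illustrate the intended usage, here is a minimal sketch of selecting
GASM and supplying user-defined subdomains. The variables A, b, x, n, iis,
and ois are placeholders for the user's matrix, vectors, and inner/outer
subdomain index sets, and the signatures follow the PCGASM manual pages,
so details may shift while GASM is in development:

    KSP ksp;
    PC  pc;
    /* iis[]/ois[] are n user-built inner/outer subdomain index sets;
       an IS created on a subcommunicator yields a subdomain that
       straddles the ranks of that subcommunicator. */
    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCGASM);CHKERRQ(ierr);
    ierr = PCGASMSetSubdomains(pc,n,iis,ois);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);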

The difficulty with subdomains straddling several ranks is that the user is
responsible for generating these subdomains. PCGASMCreateSubdomains2D is a
helper subroutine that produces a rank-straddling partition from DA-like
data. It is of limited use, since it works for structured 2D meshes only,
and the currently implemented partitioning "algorithm" is naive enough that
the subdomain solves end up serialized. This can be improved, but in the
absence of users I have not made time to do it.
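
For example, on an M x N structured grid the helper can be used roughly as
follows; M, N, Mdomains, Ndomains, and overlap are placeholders, and the
argument order is taken from the PCGASMCreateSubdomains2D manual page, so
it may change while the interface settles:

    PetscInt  Nsub;
    IS       *iis,*ois;  /* inner (non-overlapping) and outer (overlapping) subdomains */
    /* Build an Mdomains x Ndomains rank-straddling partition of an
       M x N structured grid with 1 dof per node and the given overlap. */
    ierr = PCSetType(pc,PCGASM);CHKERRQ(ierr);
    ierr = PCGASMCreateSubdomains2D(pc,M,N,Mdomains,Ndomains,1,overlap,
                                    &Nsub,&iis,&ois);CHKERRQ(ierr);
    ierr = PCGASMSetSubdomains(pc,Nsub,iis,ois);CHKERRQ(ierr);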

The longer-term plan is to have an interface to various mesh packages so
that the subdomain partition information can be read from them (in addition
to the parallel partition). Similar functionality is required for FETI-like
subdivisions, and I'm currently working on one of these mesh/partitioning
hookups (initially for MOAB). We can definitely help a particular
application/user with using this functionality.
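
As an aside, the single-rank-per-subdomain customization Thomas describes
below already works with plain ASM: after setup one can retrieve the
sub-KSPs and switch them from the default PREONLY+ILU to GMRES. A minimal
sketch (ksp and pc as in an ordinary ASM solve):

    PetscInt  nlocal,first,i;
    KSP      *subksp;
    PC        subpc;
    ierr = KSPSetUp(ksp);CHKERRQ(ierr);  /* sub-KSPs exist only after setup */
    ierr = PCASMGetSubKSP(pc,&nlocal,&first,&subksp);CHKERRQ(ierr);
    for (i=0; i<nlocal; i++) {
      ierr = KSPSetType(subksp[i],KSPGMRES);CHKERRQ(ierr);  /* GMRES subsolve */
      ierr = KSPGetPC(subksp[i],&subpc);CHKERRQ(ierr);
      ierr = PCSetType(subpc,PCILU);CHKERRQ(ierr);          /* ILU as sub-PC */
    }

The same effect is available from the command line with -sub_ksp_type gmres.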

Dmitry.

On Mon, Jan 24, 2011 at 12:49 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:

>
>  Thomas,
>
>     There is no way to have parallel subdomains in PETSc 3.1 for additive
> Schwarz, but one of us has just added support in petsc-dev for exactly this
> approach. You can access petsc-dev via
> http://www.mcs.anl.gov/petsc/petsc-as/developers/index.html   Since this
> is a new, not-yet-released feature, please join the mailing list
> petsc-dev at mcs.anl.gov
> http://www.mcs.anl.gov/petsc/petsc-as/miscellaneous/mailing-lists.html and
> communicate issues regarding this topic on that list.
>
>   Barry
>
>
>
> On Jan 24, 2011, at 8:09 AM, DUFAUD THOMAS wrote:
>
> > Hi,
> > I noticed that the local solves of an ASM preconditioner are performed
> on a single processor per domain, usually with a KSP of type PREONLY
> applying an ILU factorization.
> > I would like to perform those local solves with a Krylov method (GMRES)
> across a set of processors.
> >
> > Is it possible, for an ASM preconditioner, to assign a subgroup of
> processors to each domain and then define a parallel sub-solver over a
> sub-communicator?
> >
> > If so, how can I manage operations such as MatIncreaseOverlap?
> > If not, is there a way to do that in PETSc?
> >
> > Thanks,
> >
> > Thomas
>
>