petsc-dev has PCGASM, which is a "generalization" of PCASM that allows for subdomains that live on a subcommunicator of the PC's communicator. The API is nearly identical to ASM's, and GASM will eventually replace ASM, once we are reasonably sure it works correctly (e.g., I'm chasing down a small memory leak in GASM at the moment).

The difficulty with subdomains straddling several ranks is that the user is responsible for generating these subdomains.
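Roughly, handing such user-generated subdomains to GASM would look like the following (an untested sketch: the exact PCGASMSetSubdomains()/PCGASMSetOverlap() calling sequences, the expected IS communicator, and whether a NULL outer-subdomain argument is allowed should all be checked against the current petsc-dev man pages):

/* Untested sketch: register user-generated subdomains with PCGASM.
 * Each rank contributes its piece of the subdomain it participates in,
 * as indices in the global (parallel) numbering; several ranks contributing
 * pieces of the same subdomain is what makes that subdomain straddle ranks.
 * The IS communicator (PETSC_COMM_SELF here) is an assumption to verify. */
#include <petscksp.h>

PetscErrorCode SetupGASM(KSP ksp, PetscInt nidx, const PetscInt *gidx)
{
  PC             pc;
  IS             iis;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGASM);CHKERRQ(ierr);

  /* this rank's portion of its (single) inner, non-overlapping subdomain */
  ierr = ISCreateGeneral(PETSC_COMM_SELF, nidx, gidx, PETSC_COPY_VALUES, &iis);CHKERRQ(ierr);

  /* NULL outer subdomains: let GASM grow them from the inner ones ... */
  ierr = PCGASMSetSubdomains(pc, 1, &iis, NULL);CHKERRQ(ierr);
  ierr = PCGASMSetOverlap(pc, 1);CHKERRQ(ierr);                /* ... using this overlap */

  ierr = ISDestroy(&iis);CHKERRQ(ierr); /* GASM is assumed to keep its own reference */
  return 0;
}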
PCGASMCreateSubdomains2D is a helper subroutine that will produce a rank-straddling partition using DA-like data. This is of limited use, since it works for structured 2D meshes only. The currently implemented partitioning "algorithm" is naive enough that it serializes the subdomain solves. This can be improved, but in the absence of users I have not found the time to do it.

The longer-term plan is to have an interface to various mesh packages to read the subdomain partition information from them (in addition to the parallel partition). Similar functionality is required for FETI-like subdivisions, and I'm currently working on one of these mesh/partitioning hookups (initially, for MOAB). We can definitely help the particular application/user make use of this functionality.

Dmitry.

On Mon, Jan 24, 2011 at 12:49 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

Thomas,

There is no way to have parallel subdomains in PETSc 3.1 for additive Schwarz, but one of us has just added support in petsc-dev for exactly this approach. You can access petsc-dev via http://www.mcs.anl.gov/petsc/petsc-as/developers/index.html Since this is a new, not yet released feature, please join the mailing list petsc-dev@mcs.anl.gov (http://www.mcs.anl.gov/petsc/petsc-as/miscellaneous/mailing-lists.html) and communicate issues regarding this topic on that list.

Barry


On Jan 24, 2011, at 8:09 AM, DUFAUD THOMAS wrote:

> Hi,
> I noticed that the local solve of an ASM preconditioner is performed on a single processor per domain, usually by setting a KSP of type PREONLY to perform an ILU factorization.
> I would like to perform those local solves with a Krylov method (GMRES) over a set of processors.
>
> Is it possible, for an ASM preconditioner, to assign a subgroup of processors to each domain and then define a parallel sub-solver over a sub-communicator?
>
> If so, how can I manage operations such as MatIncreaseOverlap?
> If not, is there a way to do this in PETSc?
>
> Thanks,
>
> Thomas
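
(For reference, the default one-rank-per-subdomain configuration described in the question corresponds to a setup like the following sketch, or equivalently the options -pc_type asm -sub_ksp_type preonly -sub_pc_type ilu.)

/* Sketch of the default one-rank-per-subdomain ASM setup: each local block
 * is solved with PREONLY + ILU. */
PetscErrorCode UseILUOnASMBlocks(KSP ksp)
{
  PC             pc, subpc;
  KSP           *subksp;
  PetscInt       nlocal, first, i;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr); /* creates the sequential sub-KSPs */
  ierr = PCASMGetSubKSP(pc, &nlocal, &first, &subksp);CHKERRQ(ierr);
  for (i = 0; i < nlocal; i++) {
    ierr = KSPSetType(subksp[i], KSPPREONLY);CHKERRQ(ierr);
    ierr = KSPGetPC(subksp[i], &subpc);CHKERRQ(ierr);
    ierr = PCSetType(subpc, PCILU);CHKERRQ(ierr);
  }
  return 0;
}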