[petsc-users] Reduce DMDA's

Matthew Knepley knepley at gmail.com
Thu Sep 25 09:18:24 CDT 2014


On Thu, Sep 25, 2014 at 2:43 AM, Filippo Leonardi
<filippo.leonardi at sam.math.ethz.ch> wrote:

> Hi,
>
> Let's say I have some independent DMDAs, created as follows:
>
>     MPI_Comm_split(MPI_COMM_WORLD, comm_rank % 2, 0, &newcomm);
>
>     DM da;
>     DMDACreate2d(newcomm, DM_BOUNDARY_PERIODIC, DM_BOUNDARY_NONE,
>                  DMDA_STENCIL_BOX, 50, 50, PETSC_DECIDE, PETSC_DECIDE,
>                  1, 1, NULL, NULL, &da);
>
> For instance, with 4 processors I get 2 DMDAs. Now I want to reduce (in the
> sense of MPI) the global/local DMDA vectors onto only one of the MPI groups
> (say group 0). Is there an elegant way (e.g. with scatters) to do that?
>
> My current implementation would be: get the local array on each process and
> reduce (with MPI_Reduce) to the root of each partition.
>
> DMDA for group 0:
> +------+------+
> | 0    | 1    |
> |      |      |
> +------+------+
> | 2    | 3    |
> |      |      |
> +------+------+
> DMDA for group 1:
> +------+------+
> | 4    | 5    |
> |      |      |
> +------+------+
> | 6    | 7    |
> |      |      |
> +------+------+
>
> Reduce rank 0 and 4 to rank 0.
> Reduce rank 1 and 5 to rank 1.
> Reduce rank 2 and 6 to rank 2.
> Reduce rank 3 and 7 to rank 3.
>
> Clearly this implementation is cumbersome. Any idea?
>

I think that is the simplest way to do it, and it is only three calls:
VecGetArray()/VecRestoreArray() and MPI_Reduce().
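
For concreteness, here is a minimal sketch of those three calls (error checking
omitted). It assumes both groups created their DMDA with identical global sizes
and process grids, so matching ranks own the same local index range; pair_comm
is a hypothetical communicator joining each group-0 rank with its partner in the
other group (for the 8-rank layout above, e.g.
MPI_Comm_split(MPI_COMM_WORLD, comm_rank % 4, 0, &pair_comm)), and gvec is the
group's global vector from DMCreateGlobalVector().

    PetscScalar *a;
    PetscInt     n;
    PetscMPIInt  cnt, prank;

    VecGetLocalSize(gvec, &n);
    PetscMPIIntCast(n, &cnt);
    VecGetArray(gvec, &a);
    MPI_Comm_rank(pair_comm, &prank);
    if (!prank) {
      /* pair root (a group-0 rank): sum the partner's values in place */
      MPI_Reduce(MPI_IN_PLACE, a, cnt, MPIU_SCALAR, MPIU_SUM, 0, pair_comm);
    } else {
      /* partner rank: only sends its local values */
      MPI_Reduce(a, NULL, cnt, MPIU_SCALAR, MPIU_SUM, 0, pair_comm);
    }
    VecRestoreArray(gvec, &a);

After VecRestoreArray() each group-0 rank holds the summed field in its own
DMDA vector, so the result can be used with the usual DMDA machinery.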

  Matt


> Best,
> Filippo




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener