[petsc-users] Distribute a global vector

Jed Brown jedbrown at mcs.anl.gov
Sat Oct 8 08:15:21 CDT 2011


On Sat, Oct 8, 2011 at 05:04, Dominik Szczerba <dominik at itis.ethz.ch> wrote:

> I have my parallel layout and I can easily gather my MPI Vec's
> components into one sequential Vec on the root process using
> VecScatterCreateToZero. Works perfectly.
>
> But now I want the opposite, a "VecScatterCreateFromZero". There is no
> such function, but the name captures my intention: I know my parallel
> layout, and I have a sequential Vec on the root process that I want
> partitioned and distributed to all processes.


http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/Vec/VecScatterCreateToZero.html

then

http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/Vec/SCATTER_REVERSE.html
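
Concretely, the scatter context returned by VecScatterCreateToZero can be
applied in the reverse direction to push data from the rank-0 sequential Vec
into the parallel Vec. A minimal sketch of how the two pages above combine
(the names vpar/vseq are mine, and the Destroy calls use the petsc-dev
pointer-argument signatures; older releases pass the objects by value):

  Vec            vseq;   /* sequential Vec: full length on rank 0, length 0 elsewhere */
  VecScatter     ctx;
  PetscErrorCode ierr;

  /* Forward direction of this context gathers vpar onto rank 0 */
  ierr = VecScatterCreateToZero(vpar, &ctx, &vseq);CHKERRQ(ierr);

  /* ... rank 0 fills vseq here, e.g. through VecGetArray() ... */

  /* Reverse direction: rank 0's sequential Vec is the source, the parallel
     Vec is the destination; each process receives only the entries it owns
     in vpar's layout. */
  ierr = VecScatterBegin(ctx, vseq, vpar, INSERT_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx, vseq, vpar, INSERT_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);

  ierr = VecScatterDestroy(&ctx);CHKERRQ(ierr);
  ierr = VecDestroy(&vseq);CHKERRQ(ierr);

Note that with SCATTER_REVERSE the two Vec arguments are passed in the
opposite order from the forward scatter.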



> After studying all the
> VecScatter* functions I remain unsure how best to accomplish this, e.g.
> VecScatterCreateToAll sounds promising but seems to scatter the whole
> vector to every process, while I only need each process to receive its
> own chunk. I am thinking along these lines:
>
> // arrayGlobal is a sequential vector on root with known size and in
> application ordering.
> // arrayLocal is a MPI vector with known global and local sizes.
> IS ix;
> // Fill ix on each process with the global IDs this process owns
> ierr = VecScatterCreate(arrayGlobal, ix, arrayLocal, PETSC_NULL, &scatter);
>
> Is this right or is there a better/more elegant way?
>
> Many thanks for any hints,
> Dominik
>