[petsc-dev] How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
Matthew Knepley
knepley at gmail.com
Wed Jan 3 10:51:02 CST 2018
On Wed, Jan 3, 2018 at 11:47 AM, Franck Houssen <franck.houssen at inria.fr> wrote:
> How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
> Shall I use VecScatterCreateToZero "backward", or use VecScatterCreate instead? If so, how? (My understanding is that VecScatterCreate can take only parallel vectors as input.)
> Not sure how to do this.
>
It's not clear what you want. Do you want a seq vector duplicated on all procs? Do you want it split up? The short answer is: use the appropriate scatter.
Matt
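
Concretely, a minimal sketch of that round trip (untested, assuming the standard PETSc C API; the names glob, loc, and scat are illustrative): let VecScatterCreateToZero allocate the rank-0 sequential vector itself, then reuse the same context with SCATTER_REVERSE and the two vectors swapped to push the values back.

#include <petsc.h>

int main(int argc, char **argv) {
  Vec         glob, loc;
  VecScatter  scat;
  PetscMPIInt rank;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  // One local entry per rank: glob = [1., 2., ...].
  VecCreateMPI(PETSC_COMM_WORLD, 1, PETSC_DECIDE, &glob);
  VecSetValue(glob, rank, (PetscScalar)(1. + rank), INSERT_VALUES);
  VecAssemblyBegin(glob); VecAssemblyEnd(glob);

  // Forward: gather onto rank 0. VecScatterCreateToZero creates loc
  // itself: full length on rank 0, length zero on all other ranks.
  VecScatterCreateToZero(glob, &scat, &loc);
  VecScatterBegin(scat, glob, loc, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(scat, glob, loc, INSERT_VALUES, SCATTER_FORWARD);

  if (rank == 0) VecScale(loc, -1.); // modify the gathered copy on rank 0 only

  // Backward: same context, vector arguments swapped, SCATTER_REVERSE.
  VecScatterBegin(scat, loc, glob, INSERT_VALUES, SCATTER_REVERSE);
  VecScatterEnd(scat, loc, glob, INSERT_VALUES, SCATTER_REVERSE);
  VecView(glob, PETSC_VIEWER_STDOUT_WORLD); // expect [-1., -2.] on 2 ranks

  VecScatterDestroy(&scat);
  VecDestroy(&loc);
  VecDestroy(&glob);
  PetscFinalize();
  return 0;
}

Two differences from the code below: loc is not pre-created on rank 0 (VecScatterCreateToZero overwrites that handle, so the pre-created vector is simply leaked), and the reverse scatter uses INSERT_VALUES, since every global entry has exactly one source.
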
>
> Franck
>
> In this example, the final VecView of the parallel globVec vector should be [-1., -2.].
>
> >> mpirun -n 2 ./vecScatterGatherRoot.exe
> Vec Object: 2 MPI processes
> type: mpi
> Process [0]
> 1.
> Process [1]
> 2.
> Vec Object: 1 MPI processes
> type: seq
> 1.
> 2.
> Vec Object: 1 MPI processes
> type: seq
> -1.
> -2.
> [1]PETSC ERROR: ------------------------------------------------------------------------
> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
>
> >> more vecScatterGatherRoot.cpp
> // How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
> //
> // ~> g++ -o vecScatterGatherRoot.exe vecScatterGatherRoot.cpp -lpetsc -lm; mpirun -n X vecScatterGatherRoot.exe
>
> #include "petsc.h"
>
> int main(int argc, char **argv) {
>   PetscInitialize(&argc, &argv, NULL, NULL);
>   int size = 0; MPI_Comm_size(MPI_COMM_WORLD, &size);
>   int rank = 0; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>   PetscInt globSize = size;
>   Vec globVec; VecCreateMPI(PETSC_COMM_WORLD, 1, globSize, &globVec);
>   VecSetValue(globVec, rank, (PetscScalar)(1. + rank), INSERT_VALUES);
>   VecAssemblyBegin(globVec); VecAssemblyEnd(globVec);
>   VecView(globVec, PETSC_VIEWER_STDOUT_WORLD);
>   PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
>
>   // Collect all the values from a parallel PETSc vector into a vector on the zeroth processor.
>
>   Vec locVec = NULL;
>   if (rank == 0) {
>     PetscInt locSize = globSize;
>     VecCreateSeq(PETSC_COMM_SELF, locSize, &locVec); VecSet(locVec, -1.);
>   }
>   VecScatter scatCtx; VecScatterCreateToZero(globVec, &scatCtx, &locVec);
>   VecScatterBegin(scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
>   VecScatterEnd  (scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
>
>   // Modify sequential vector on the zeroth processor.
>
>   if (rank == 0) {
>     VecView(locVec, PETSC_VIEWER_STDOUT_SELF);
>     PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
>     VecScale(locVec, -1.);
>     VecView(locVec, PETSC_VIEWER_STDOUT_SELF);
>     PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
>   }
>   MPI_Barrier(MPI_COMM_WORLD);
>
>   // How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
>
>   VecSet(globVec, 0.);
>   VecScatterBegin(scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>   VecScatterEnd  (scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>   VecView(globVec, PETSC_VIEWER_STDOUT_WORLD);
>   PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
>
>   PetscFinalize();
> }
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/