How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
Should I use VecScatterCreateToZero "backward" (that is, with SCATTER_REVERSE), or use VecScatterCreate instead? If so, how? (My understanding is that VecScatterCreate can only take a parallel vector as input.)
I am not sure how to do this; a sketch of what I have in mind with VecScatterCreate is at the end of this message.

Franck

In the example below, the final VecView of the parallel vector globVec should print [-1., -2.]:

>> mpirun -n 2 ./vecScatterGatherRoot.exe
Vec Object: 2 MPI processes
  type: mpi
Process [0]
1.
Process [1]
2.
Vec Object: 1 MPI processes
  type: seq
1.
2.
Vec Object: 1 MPI processes
  type: seq
-1.
-2.
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range

>> more vecScatterGatherRoot.cpp
// How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
//
// ~> g++ -o vecScatterGatherRoot.exe vecScatterGatherRoot.cpp -lpetsc -lm; mpirun -n X vecScatterGatherRoot.exe

#include "petsc.h"

int main(int argc, char **argv) {
  PetscInitialize(&argc, &argv, NULL, NULL);
  int size = 0; MPI_Comm_size(MPI_COMM_WORLD, &size);
  int rank = 0; MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  PetscInt globSize = size;
  Vec globVec; VecCreateMPI(PETSC_COMM_WORLD, 1, globSize, &globVec);
  VecSetValue(globVec, rank, (PetscScalar) (1. + rank), INSERT_VALUES);
  VecAssemblyBegin(globVec); VecAssemblyEnd(globVec);
  VecView(globVec, PETSC_VIEWER_STDOUT_WORLD); PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);

  // Collect all the values from the parallel PETSc vector into a sequential vector on the zeroth processor.

  Vec locVec = NULL;
  if (rank == 0) {
    PetscInt locSize = globSize;
    VecCreateSeq(PETSC_COMM_SELF, locSize, &locVec); VecSet(locVec, -1.);
  }
  VecScatter scatCtx; VecScatterCreateToZero(globVec, &scatCtx, &locVec);
  VecScatterBegin(scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd  (scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);

  // Modify the sequential vector on the zeroth processor.

  if (rank == 0) {
    VecView(locVec, PETSC_VIEWER_STDOUT_SELF); PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
    VecScale(locVec, -1.);
    VecView(locVec, PETSC_VIEWER_STDOUT_SELF); PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
  }
  MPI_Barrier(MPI_COMM_WORLD);

  // How do I collect all the values from the sequential vector on the zeroth processor back into the parallel PETSc vector?

  VecSet(globVec, 0.);
  VecScatterBegin(scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
  VecScatterEnd  (scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
  VecView(globVec, PETSC_VIEWER_STDOUT_WORLD); PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);

  PetscFinalize();
  return 0;
}
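For reference, here is the kind of thing I have in mind with VecScatterCreate: build the gather-to-rank-0 scatter explicitly from stride index sets, use it forward to collect the parallel vector on process 0, and reuse the same context with SCATTER_REVERSE to push the modified values back. This is only a minimal, untested sketch; the names nloc, seqVec, isFrom, isTo and ctx are mine, and I am assuming (from the VecScatterBegin/VecScatterCreate man pages) that the source and destination arguments are swapped under SCATTER_REVERSE, and that INSERT_VALUES suffices since each global entry receives exactly one value.

// Sketch (untested): gather the parallel vector onto rank 0 with an explicit
// VecScatterCreate, modify it there, then push it back with SCATTER_REVERSE.
// ~> g++ -o vecScatterCreateSketch.exe vecScatterCreateSketch.cpp -lpetsc -lm; mpirun -n X vecScatterCreateSketch.exe

#include "petsc.h"

int main(int argc, char **argv) {
  PetscInitialize(&argc, &argv, NULL, NULL);
  int size = 0; MPI_Comm_size(MPI_COMM_WORLD, &size);
  int rank = 0; MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Parallel vector: one entry per process, globVec = [1., 2., ...].
  PetscInt globSize = size;
  Vec globVec; VecCreateMPI(PETSC_COMM_WORLD, 1, globSize, &globVec);
  VecSetValue(globVec, rank, (PetscScalar) (1. + rank), INSERT_VALUES);
  VecAssemblyBegin(globVec); VecAssemblyEnd(globVec);

  // Sequential vector holding everything on rank 0, nothing elsewhere.
  PetscInt nloc = (rank == 0) ? globSize : 0;
  Vec seqVec; VecCreateSeq(PETSC_COMM_SELF, nloc, &seqVec);

  // Index sets: rank 0 takes global entries 0..globSize-1, the other ranks take none.
  IS isFrom; ISCreateStride(PETSC_COMM_SELF, nloc, 0, 1, &isFrom); // indices into globVec
  IS isTo;   ISCreateStride(PETSC_COMM_SELF, nloc, 0, 1, &isTo);   // indices into seqVec
  VecScatter ctx; VecScatterCreate(globVec, isFrom, seqVec, isTo, &ctx);

  // Forward: parallel globVec -> sequential seqVec on rank 0.
  VecScatterBegin(ctx, globVec, seqVec, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd  (ctx, globVec, seqVec, INSERT_VALUES, SCATTER_FORWARD);

  // Modify the sequential vector on rank 0 only.
  if (rank == 0) VecScale(seqVec, -1.);

  // Reverse: sequential seqVec on rank 0 -> parallel globVec
  // (with SCATTER_REVERSE the source/destination arguments are swapped).
  VecScatterBegin(ctx, seqVec, globVec, INSERT_VALUES, SCATTER_REVERSE);
  VecScatterEnd  (ctx, seqVec, globVec, INSERT_VALUES, SCATTER_REVERSE);
  VecView(globVec, PETSC_VIEWER_STDOUT_WORLD); // expected: [-1., -2., ...]

  ISDestroy(&isFrom); ISDestroy(&isTo);
  VecScatterDestroy(&ctx);
  VecDestroy(&seqVec); VecDestroy(&globVec);
  PetscFinalize();
  return 0;
}

If this is valid, I suppose the context returned by VecScatterCreateToZero could be reused the same way in reverse, which is what the listing above attempts.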