[petsc-dev] How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
Franck Houssen
franck.houssen at inria.fr
Wed Jan 3 10:47:30 CST 2018
How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
Shall I use VecScatterCreateToZero "backward", or use VecScatterCreate instead? If the latter, how? (My understanding is that VecScatterCreate takes only parallel vectors as input.)
Not sure how to do this.
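To make the question concrete, here is the "backward" pattern I have in mind, pieced together from the VecScatterCreateToZero man page. This is an untested sketch that assumes an already assembled parallel Vec globVec; it is not quite what the failing program below does, since there I also pre-create the sequential vector myself:

// Let VecScatterCreateToZero create the sequential vector (full length on
// rank 0, length 0 elsewhere), then reuse the same context with
// SCATTER_REVERSE to push the rank-0 values back into the parallel vector.
Vec seqVec; VecScatter ctx;
VecScatterCreateToZero(globVec, &ctx, &seqVec);
VecScatterBegin(ctx, globVec, seqVec, INSERT_VALUES, SCATTER_FORWARD);
VecScatterEnd  (ctx, globVec, seqVec, INSERT_VALUES, SCATTER_FORWARD);
// ... modify seqVec on rank 0 only ...
VecScatterBegin(ctx, seqVec, globVec, INSERT_VALUES, SCATTER_REVERSE);
VecScatterEnd  (ctx, seqVec, globVec, INSERT_VALUES, SCATTER_REVERSE);
VecScatterDestroy(&ctx); VecDestroy(&seqVec);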
Franck
In this example, the final VecView of the parallel globVec vector should print [-1., -2.]:
>> mpirun -n 2 ./vecScatterGatherRoot.exe
Vec Object: 2 MPI processes
type: mpi
Process [0]
1.
Process [1]
2.
Vec Object: 1 MPI processes
type: seq
1.
2.
Vec Object: 1 MPI processes
type: seq
-1.
-2.
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
>> more vecScatterGatherRoot.cpp
// How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
//
// ~> mpicxx -o vecScatterGatherRoot.exe vecScatterGatherRoot.cpp -lpetsc -lm; mpirun -n X ./vecScatterGatherRoot.exe
#include "petsc.h"
int main(int argc, char **argv) {
  PetscInitialize(&argc, &argv, NULL, NULL);
  int size = 0; MPI_Comm_size(MPI_COMM_WORLD, &size);
  int rank = 0; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  PetscInt globSize = size;
  Vec globVec; VecCreateMPI(PETSC_COMM_WORLD, 1, globSize, &globVec);
  VecSetValue(globVec, rank, (PetscScalar)(1. + rank), INSERT_VALUES);
  VecAssemblyBegin(globVec); VecAssemblyEnd(globVec);
  VecView(globVec, PETSC_VIEWER_STDOUT_WORLD); PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
  // Collect all the values from a parallel PETSc vector into a sequential vector on the zeroth processor.
  Vec locVec = NULL;
  if (rank == 0) {
    PetscInt locSize = globSize;
    VecCreateSeq(PETSC_COMM_SELF, locSize, &locVec); VecSet(locVec, -1.);
  }
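  // Note: per its man page, VecScatterCreateToZero creates the sequential
  // vector itself (full length on rank 0, length 0 on the other ranks), so
  // it overwrites, and leaks, the locVec created just above.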
  VecScatter scatCtx; VecScatterCreateToZero(globVec, &scatCtx, &locVec);
  VecScatterBegin(scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd  (scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
  // Modify the sequential vector on the zeroth processor.
  if (rank == 0) {
    VecView(locVec, PETSC_VIEWER_STDOUT_SELF); PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
    VecScale(locVec, -1.);
    VecView(locVec, PETSC_VIEWER_STDOUT_SELF); PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
  }
  MPI_Barrier(MPI_COMM_WORLD);
  // The step in question: scatter the sequential vector on the zeroth processor back into the parallel PETSc vector.
  VecSet(globVec, 0.);
  VecScatterBegin(scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
  VecScatterEnd  (scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
  VecView(globVec, PETSC_VIEWER_STDOUT_WORLD); PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
  VecScatterDestroy(&scatCtx); VecDestroy(&locVec); VecDestroy(&globVec);
  PetscFinalize();
  return 0;
}
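For completeness, here is what I imagine the VecScatterCreate alternative would look like. Again an untested sketch: it assumes, contrary to what I wrote above, that VecScatterCreate also accepts a sequential "from" vector as long as every rank passes one (length 0 everywhere but rank 0), with the stride IS giving global indices into globVec:

#include <petsc.h>
int main(int argc, char **argv) {
  PetscInitialize(&argc, &argv, NULL, NULL);
  int rank = 0; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  Vec globVec; VecCreateMPI(PETSC_COMM_WORLD, 1, PETSC_DECIDE, &globVec);
  PetscInt N; VecGetSize(globVec, &N);
  // Every rank holds a sequential vector: the full copy on rank 0, empty elsewhere.
  PetscInt nloc = (rank == 0) ? N : 0;
  Vec seqVec; VecCreateSeq(PETSC_COMM_SELF, nloc, &seqVec);
  if (rank == 0)
    for (PetscInt i = 0; i < N; i++) VecSetValue(seqVec, i, -(PetscScalar)(i + 1.), INSERT_VALUES);
  VecAssemblyBegin(seqVec); VecAssemblyEnd(seqVec);
  // Rank 0 scatters entries 0..N-1 of seqVec onto global indices 0..N-1 of
  // globVec; the other ranks contribute an empty index set.
  IS is; ISCreateStride(PETSC_COMM_SELF, nloc, 0, 1, &is);
  VecScatter ctx; VecScatterCreate(seqVec, is, globVec, is, &ctx);
  VecScatterBegin(ctx, seqVec, globVec, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd  (ctx, seqVec, globVec, INSERT_VALUES, SCATTER_FORWARD);
  VecView(globVec, PETSC_VIEWER_STDOUT_WORLD); // expect -1., -2. with -n 2
  ISDestroy(&is); VecScatterDestroy(&ctx); VecDestroy(&seqVec); VecDestroy(&globVec);
  PetscFinalize();
  return 0;
}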