[petsc-users] Combine distributed vector into local vector
Anthony Ruth
Anthony.J.Ruth.12 at nd.edu
Fri Jan 19 13:25:25 CST 2018
Hello, I am working on converting a serial code to use PETSc. The program starts in serial with the old code, creates some PETSc matrices/vectors, does the most critical calculations in PETSc, and then returns the results to the old serial code. After that I was planning to expand the portion of the code that uses PETSc, but starting with a small chunk seemed appropriate.
To return to the serial code, I need every process to receive a full copy of a vector. I believe VecScatters are intended for this, but I cannot get one to work. I have produced a minimal example which starts with a serial double array, creates a Vec which is successfully distributed, and then tries to use a VecScatter to aggregate the vector onto each process. After the VecScatter step, each process only has the part of the vector it already owned in the distributed vector. If you could please show how to aggregate the vector, it would be greatly appreciated.
Run with: mpiexec -n 2 ./test
#include <petscvec.h>

double primes[] = {2,3,5,7,11,13,17};
int nprimes = 7;

int main(int argc,char **argv)
{
  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm        comm = MPI_COMM_WORLD;
  Vec             xpar, xseq;
  PetscInt        low, high;
  IS              index_set_global, index_set_local;
  const PetscInt  *indices;
  VecScatter      vc;
  PetscErrorCode  ierr;

  //Set up parallel vector
  ierr = VecCreateMPI(comm, PETSC_DETERMINE, nprimes, &xpar); CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(xpar, &low, &high); CHKERRQ(ierr);
  ierr = ISCreateStride(comm, high - low, low, 1, &index_set_global); CHKERRQ(ierr);
  ierr = ISGetIndices(index_set_global, &indices); CHKERRQ(ierr);
  ierr = ISView(index_set_global, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
  ierr = VecSetValues(xpar, high - low, indices, primes + low, INSERT_VALUES); CHKERRQ(ierr);
  ierr = VecAssemblyBegin(xpar); CHKERRQ(ierr);
  ierr = VecAssemblyEnd(xpar); CHKERRQ(ierr);
  ierr = VecView(xpar, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);

  //Scatter parallel vector so all processes have full vector
  ierr = VecCreateSeq(PETSC_COMM_SELF, nprimes, &xseq); CHKERRQ(ierr);
  //ierr = VecCreateMPI(comm, high - low, nprimes, &xseq); CHKERRQ(ierr);
  ierr = ISCreateStride(comm, high - low, 0, 1, &index_set_local); CHKERRQ(ierr);
  ierr = VecScatterCreate(xpar, index_set_local, xseq, index_set_global, &vc); CHKERRQ(ierr);
  ierr = VecScatterBegin(vc, xpar, xseq, ADD_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
  ierr = VecScatterEnd(vc, xpar, xseq, ADD_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);

  ierr = PetscPrintf(PETSC_COMM_SELF, "\nPrinting out scattered vector\n"); CHKERRQ(ierr);
  ierr = VecView(xseq, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);

  PetscFinalize();
}
Output:
IS Object: 2 MPI processes
type: stride
[0] Index set is permutation
[0] Number of indices in (stride) set 4
[0] 0 0
[0] 1 1
[0] 2 2
[0] 3 3
[1] Number of indices in (stride) set 3
[1] 0 4
[1] 1 5
[1] 2 6
Vec Object: 2 MPI processes
type: mpi
Process [0]
2.
3.
5.
7.
Process [1]
11.
13.
17.
Printing out scattered vector
Printing out scattered vector
Vec Object: 1 MPI processes
type: seq
2.
3.
5.
7.
0.
0.
0.
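For what it's worth, here is a minimal sketch of what I imagine the gather step should look like if VecScatterCreateToAll is the right tool for this (that is my assumption, and what I am hoping someone can confirm). The names xall and toall are just placeholders; this would replace the "//Scatter parallel vector" section of the example above:

// Sketch only: ask PETSc to create both the sequential target vector and
// the scatter context that copies the whole parallel vector to every process.
Vec        xall;   // sequential vector of global length on each process
VecScatter toall;  // scatter context created by PETSc

ierr = VecScatterCreateToAll(xpar, &toall, &xall); CHKERRQ(ierr);
ierr = VecScatterBegin(toall, xpar, xall, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
ierr = VecScatterEnd(toall, xpar, xall, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);

// If my reading is right, each process now holds the full vector in xall
// and could copy it back into the plain double array for the serial code.
ierr = VecScatterDestroy(&toall); CHKERRQ(ierr);
ierr = VecDestroy(&xall); CHKERRQ(ierr);

If that is indeed the intended approach, I would then pull the values back into the old serial array with VecGetArrayRead on xall; otherwise I would like to understand what is wrong with the index sets in my VecScatterCreate call above.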
Anthony Ruth
Condensed Matter Theory
University of Notre Dame