Got it, tested it: all is OK! Thanks, guys.

Franck

----- Original Message -----
From: "Barry F. Smith" <bsmith@mcs.anl.gov>
To: "Franck Houssen" <franck.houssen@inria.fr>
Cc: "Barry F. Smith" <bsmith@mcs.anl.gov>, "Matthew Knepley" <knepley@gmail.com>, "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
Sent: Thursday, January 4, 2018 18:10:03
Subject: Re: [petsc-dev] How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?

  Franck,

    This is our bug. I have attached a patch and have also fixed it in the PETSc repository master branch.

    Apply it with

      patch -p1 < franck.patch

    if you are using the tarball version of PETSc.

    Barry

> On Jan 4, 2018, at 2:31 AM, Franck Houssen <franck.houssen@inria.fr> wrote:
> 
> I attached it in the very first mail.
> 
> Franck
> 
> >> more vecScatterGatherRoot.cpp
> // How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
> //
> // ~> g++ -o vecScatterGatherRoot.exe vecScatterGatherRoot.cpp -lpetsc -lm; mpirun -n X vecScatterGatherRoot.exe
> 
> #include "petsc.h"
> 
> int main(int argc, char **argv) {
>   PetscInitialize(&argc, &argv, NULL, NULL);
>   int size = 0; MPI_Comm_size(MPI_COMM_WORLD, &size);
>   int rank = 0; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> 
>   PetscInt globSize = size;
>   Vec globVec; VecCreateMPI(PETSC_COMM_WORLD, 1, globSize, &globVec);
>   VecSetValue(globVec, rank, (PetscScalar)(1. + rank), INSERT_VALUES);
>   VecAssemblyBegin(globVec); VecAssemblyEnd(globVec);
>   VecView(globVec, PETSC_VIEWER_STDOUT_WORLD); PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
> 
>   // Collect all the values from a parallel PETSc vector into a vector on the zeroth processor.
> 
>   Vec locVec = NULL;
>   if (rank == 0) {
>     PetscInt locSize = globSize;
>     VecCreateSeq(PETSC_COMM_SELF, locSize, &locVec); VecSet(locVec, -1.);
>   }
>   VecScatter scatCtx; VecScatterCreateToZero(globVec, &scatCtx, &locVec);
>   VecScatterBegin(scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
>   VecScatterEnd  (scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
> 
>   // Modify the sequential vector on the zeroth processor.
> 
>   if (rank == 0) {
>     VecView(locVec, PETSC_VIEWER_STDOUT_SELF); PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
>     VecScale(locVec, -1.);
>     VecView(locVec, PETSC_VIEWER_STDOUT_SELF); PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
>   }
>   MPI_Barrier(MPI_COMM_WORLD);
> 
>   // How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
> 
>   VecSet(globVec, 0.);
>   VecScatterBegin(scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>   VecScatterEnd  (scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>   VecView(globVec, PETSC_VIEWER_STDOUT_WORLD); PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
> 
>   PetscFinalize();
> }
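A side note on the listing above: VecScatterCreateToZero() creates its sequential output vector itself (full length on rank 0, length 0 on every other rank), so the VecCreateSeq() in the listing is not needed; the handle it produced is overwritten by the scatter creation and that vector leaks. A minimal sketch of the gather step without the manual allocation, reusing globVec from the listing:

  // Let VecScatterCreateToZero() allocate locVec on every rank:
  // length globSize on rank 0, length 0 elsewhere.
  Vec        locVec;
  VecScatter scatCtx;
  VecScatterCreateToZero(globVec, &scatCtx, &locVec);
  VecScatterBegin(scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd  (scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);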
> 
> ----- Original Message -----
>> From: "Barry F. Smith" <bsmith@mcs.anl.gov>
>> To: "Franck Houssen" <franck.houssen@inria.fr>
>> Cc: "Barry F. Smith" <bsmith@mcs.anl.gov>, "Matthew Knepley" <knepley@gmail.com>, "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
>> Sent: Wednesday, January 3, 2018 18:20:21
>> Subject: Re: [petsc-dev] How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
>> 
>>   Send the complete code as an attachment.
>> 
>>> On Jan 3, 2018, at 11:08 AM, Franck Houssen <franck.houssen@inria.fr> wrote:
>>> 
>>> ----- Original Message -----
>>>> From: "Barry F. Smith" <bsmith@mcs.anl.gov>
>>>> To: "Franck Houssen" <franck.houssen@inria.fr>
>>>> Cc: "Matthew Knepley" <knepley@gmail.com>, "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
>>>> Sent: Wednesday, January 3, 2018 18:01:35
>>>> Subject: Re: [petsc-dev] How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
>>>> 
>>>>> On Jan 3, 2018, at 10:59 AM, Franck Houssen <franck.houssen@inria.fr> wrote:
>>>>> 
>>>>> I need the exact opposite of the operation in the FAQ entry called "How do I collect all the values from a parallel PETSc vector into a vector on the zeroth processor?"
>>>> 
>>>>   You can use VecScatterCreateToZero(), then do the scatter with SCATTER_REVERSE.
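The scatter context is the same in both directions; for SCATTER_REVERSE the source and destination arguments of VecScatterBegin()/VecScatterEnd() are swapped relative to the forward call. A minimal sketch of what Barry suggests, reusing scatCtx, locVec and globVec from the code quoted below; since each global entry receives exactly one contribution (from rank 0), INSERT_VALUES would work as well, while ADD_VALUES relies on zeroing globVec first:

  // Push the rank-0 sequential locVec back into the parallel globVec
  // by running the gather-to-zero context backwards.
  VecSet(globVec, 0.);
  VecScatterBegin(scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
  VecScatterEnd  (scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);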
>>> 
>>> That's what I tried, but I got an error. Did I miss something?
>>> 
>>> >> tail vecScatterGatherRoot.cpp
>>> 
>>> // How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector?
>>> 
>>>   VecSet(globVec, 0.);
>>>   VecScatterBegin(scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>>>   VecScatterEnd  (scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>>>   VecView(globVec, PETSC_VIEWER_STDOUT_WORLD); PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
>>> 
>>>   PetscFinalize();
>>> }
>>> 
>>> >> mpirun -n 2 ./vecScatterGatherRoot.exe
>>> Vec Object: 2 MPI processes
>>>   type: mpi
>>> Process [0]
>>> 1.
>>> Process [1]
>>> 2.
>>> Vec Object: 1 MPI processes
>>>   type: seq
>>> 1.
>>> 2.
>>> Vec Object: 1 MPI processes
>>>   type: seq
>>> -1.
>>> -2.
>>> ...
>>> [1]PETSC ERROR: [1] VecScatterBegin_MPI_ToOne line 161 /home/fghoussen/Documents/INRIA/petsc/src/vec/vec/utils/vscat.c
>>> [1]PETSC ERROR: [1] VecScatterBegin line 1698 /home/fghoussen/Documents/INRIA/petsc/src/vec/vec/utils/vscat.c
>>> [1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
>>> [1]PETSC ERROR: Signal received
>>> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
>>> 
>>>>> Shall I use VecScatterCreateToZero "backward", or use VecScatterCreate instead? If yes, how? (My understanding is that VecScatterCreate can take only parallel vectors as input.)
>>>> 
>>>>   This understanding is completely incorrect.
>>>> 
>>>>   Barry
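As background to Barry's point: VecScatterCreate() accepts any combination of sequential and parallel vectors. A rough, untested sketch of the alternative Franck asks about, scattering a rank-0 sequential vector into a distributed one with an explicitly built scatter; this is not the fix that was applied, and seqVec, is, ctx and n are illustrative names, while globVec, globSize and rank come from the listings:

  // Every rank participates; on ranks != 0 the source vector and the
  // index sets are simply empty.
  PetscInt   n = (rank == 0) ? globSize : 0;
  Vec        seqVec;
  IS         is;
  VecScatter ctx;
  VecCreateSeq(PETSC_COMM_SELF, n, &seqVec);      // fill seqVec on rank 0 here
  ISCreateStride(PETSC_COMM_SELF, n, 0, 1, &is);  // indices 0..n-1
  VecScatterCreate(seqVec, is, globVec, is, &ctx);
  VecScatterBegin(ctx, seqVec, globVec, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd  (ctx, seqVec, globVec, INSERT_VALUES, SCATTER_FORWARD);
  ISDestroy(&is); VecScatterDestroy(&ctx); VecDestroy(&seqVec);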
>>>>> 
>>>>> Not sure how to do this.
>>>>> 
>>>>> It's not clear what you want. Do you want a seq vector duplicated on all procs?
>>>>> 
>>>>> No. The seq vec should "live" only on the master proc. This seq master vec should be "converted" into a parallel vector. Not sure how to do that.
>>>>> 
>>>>> Do you want it split up? The short answer is, use the appropriate scatter.
>>>>> 
>>>>>   Matt
>>>>> 
>>>>> Franck
>>>>> 
>>>>> In this example, the final VecView of the parallel globVec vector should be [-1., -2.].
>>>>> 
>>>>> >> mpirun -n 2 ./vecScatterGatherRoot.exe
>>>>> Vec Object: 2 MPI processes
>>>>>   type: mpi
>>>>> Process [0]
>>>>> 1.
>>>>> Process [1]
>>>>> 2.
>>>>> Vec Object: 1 MPI processes
>>>>>   type: seq
>>>>> 1.
>>>>> 2.
>>>>> Vec Object: 1 MPI processes
>>>>>   type: seq
>>>>> -1.
>>>>> -2.
>>>>> [1]PETSC ERROR: ------------------------------------------------------------------------
>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
>>>>> 
>>>>> -- 
>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>>> -- Norbert Wiener
>>>>> 
>>>>> https://www.cse.buffalo.edu/~knepley/
> 
> <vecScatterGatherRoot.cpp>