[petsc-dev] How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?

Smith, Barry F. bsmith at mcs.anl.gov
Thu Jan 4 11:30:51 CST 2018


  I just forwarded it to you three times; if you don't get it, then send me an alternative email address to use. It is a small message and should not get blocked or delayed.


> On Jan 4, 2018, at 11:27 AM, Franck Houssen <franck.houssen at inria.fr> wrote:
> 
> 
> 
> ----- Original Message -----
>> From: "Barry F. Smith" <bsmith at mcs.anl.gov>
>> To: "Franck Houssen" <franck.houssen at inria.fr>
>> Cc: "Barry F. Smith" <bsmith at mcs.anl.gov>, "For users of the development version of PETSc" <petsc-dev at mcs.anl.gov>
>> Sent: Thursday, January 4, 2018 18:13:32
>> Subject: Re: [petsc-dev] How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
>> 
>> 
>> 
>>> On Jan 4, 2018, at 11:10 AM, Franck Houssen <franck.houssen at inria.fr>
>>> wrote:
>>> 
>>> Can you reproduce the error on your side? Or is the pattern I use wrong?
>> 
>>   You should have received an email from me with a patch for the PETSc code.
>>   Yes, I could reproduce it; it was clearly our bug, your code was fine.
> 
> Didn't get the (other) mail yet: I guess I'll get it tomorrow. The mail server seems to be messy/capricious... I also noticed some attachments may be "cut" if you open them from your web browser (?!). Anyway...
> OK, bug fixed: good news! I'm on maint; I guess you pushed to master, so I need to apply the patch on maint: no problem.
> 
> Thanks Barry
> 
>> 
>>  Barry
>> 
>>> 
>>> Franck
>>> 
>>> ----- Original Message -----
>>>> From: "Franck Houssen" <franck.houssen at inria.fr>
>>>> To: "Barry F. Smith" <bsmith at mcs.anl.gov>
>>>> Cc: "For users of the development version of PETSc" <petsc-dev at mcs.anl.gov>
>>>> Sent: Thursday, January 4, 2018 09:31:05
>>>> Subject: Re: [petsc-dev] How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
>>>> 
>>>> I attached it in the very first mail.
>>>> 
>>>> Franck
>>>> 
>>>>>> more vecScatterGatherRoot.cpp
>>>> // How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
>>>> //
>>>> // ~> g++ -o vecScatterGatherRoot.exe vecScatterGatherRoot.cpp -lpetsc -lm; mpirun -n X vecScatterGatherRoot.exe
>>>> 
>>>> #include "petsc.h"
>>>> 
>>>> int main(int argc,char **argv) {
>>>> PetscInitialize(&argc, &argv, NULL, NULL);
>>>> int size = 0; MPI_Comm_size(MPI_COMM_WORLD, &size);
>>>> int rank = 0; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>>> 
>>>> PetscInt globSize = size;
>>>> Vec globVec; VecCreateMPI(PETSC_COMM_WORLD, 1, globSize, &globVec);
>>>> VecSetValue(globVec, rank, (PetscScalar) (1.+rank), INSERT_VALUES);
>>>> VecAssemblyBegin(globVec); VecAssemblyEnd(globVec);
>>>> VecView(globVec, PETSC_VIEWER_STDOUT_WORLD);
>>>> PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
>>>> 
>>>> // Collect all the values from a parallel PETSc vector into a vector on the zeroth processor.
>>>> 
>>>> Vec locVec = NULL;
>>>> if (rank == 0) {
>>>>   PetscInt locSize = globSize;
>>>>   VecCreateSeq(PETSC_COMM_SELF, locSize, &locVec); VecSet(locVec, -1.);
>>>> }
>>>> VecScatter scatCtx; VecScatterCreateToZero(globVec, &scatCtx, &locVec);
>>>> VecScatterBegin(scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
>>>> VecScatterEnd  (scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
>>>> 
>>>> // Modify sequential vector on the zeroth processor.
>>>> 
>>>> if (rank == 0) {
>>>>   VecView(locVec, PETSC_VIEWER_STDOUT_SELF);
>>>>   PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
>>>>   VecScale(locVec, -1.);
>>>>   VecView(locVec, PETSC_VIEWER_STDOUT_SELF);
>>>>   PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
>>>> }
>>>> MPI_Barrier(MPI_COMM_WORLD);
>>>> 
>>>> // How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
>>>> 
>>>> VecSet(globVec, 0.);
>>>> VecScatterBegin(scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>>>> VecScatterEnd  (scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>>>> VecView(globVec, PETSC_VIEWER_STDOUT_WORLD);
>>>> PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
>>>> 
>>>> PetscFinalize();
>>>> }
>>>> 
>>>> 
>>>> ----- Original Message -----
>>>>> From: "Barry F. Smith" <bsmith at mcs.anl.gov>
>>>>> To: "Franck Houssen" <franck.houssen at inria.fr>
>>>>> Cc: "Barry F. Smith" <bsmith at mcs.anl.gov>, "Matthew Knepley" <knepley at gmail.com>, "For users of the development version of PETSc" <petsc-dev at mcs.anl.gov>
>>>>> Sent: Wednesday, January 3, 2018 18:20:21
>>>>> Subject: Re: [petsc-dev] How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
>>>>> 
>>>>> 
>>>>> Send the complete code as an attachment.
>>>>> 
>>>>> 
>>>>>> On Jan 3, 2018, at 11:08 AM, Franck Houssen <franck.houssen at inria.fr>
>>>>>> wrote:
>>>>>> 
>>>>>> ----- Original Message -----
>>>>>>> From: "Barry F. Smith" <bsmith at mcs.anl.gov>
>>>>>>> To: "Franck Houssen" <franck.houssen at inria.fr>
>>>>>>> Cc: "Matthew Knepley" <knepley at gmail.com>, "For users of the development version of PETSc" <petsc-dev at mcs.anl.gov>
>>>>>>> Sent: Wednesday, January 3, 2018 18:01:35
>>>>>>> Subject: Re: [petsc-dev] How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>>> On Jan 3, 2018, at 10:59 AM, Franck Houssen <franck.houssen at inria.fr>
>>>>>>>> wrote:
>>>>>>>> 
>>>>>>>> I need the exact opposite operation of an entry in the FAQ called "How do I collect all the values from a parallel PETSc vector into a vector on the zeroth processor?"
>>>>>>> 
>>>>>>> You can use VecScatterCreateToZero(), then do the scatter with SCATTER_REVERSE.
>>>>>>> 
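>>>>>>> Condensed, the pattern looks like this (untested sketch; it is essentially your code below with the reverse scatter added; note that VecScatterCreateToZero() itself creates locVec, length N on rank 0 and length 0 on the other ranks):
>>>>>>>
>>>>>>> #include "petsc.h"
>>>>>>> int main(int argc, char **argv) {
>>>>>>>   PetscInitialize(&argc, &argv, NULL, NULL);
>>>>>>>   int rank; MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
>>>>>>>   // One entry per process in the parallel vector.
>>>>>>>   Vec globVec; VecCreateMPI(PETSC_COMM_WORLD, 1, PETSC_DETERMINE, &globVec);
>>>>>>>   VecSetValue(globVec, rank, (PetscScalar)(1. + rank), INSERT_VALUES);
>>>>>>>   VecAssemblyBegin(globVec); VecAssemblyEnd(globVec);
>>>>>>>   // Forward: gather the parallel vector onto rank 0.
>>>>>>>   VecScatter ctx; Vec locVec;
>>>>>>>   VecScatterCreateToZero(globVec, &ctx, &locVec);
>>>>>>>   VecScatterBegin(ctx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
>>>>>>>   VecScatterEnd  (ctx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
>>>>>>>   if (rank == 0) VecScale(locVec, -1.); // modify the gathered values on rank 0 only
>>>>>>>   // Reverse: push the rank-0 values back into the parallel vector.
>>>>>>>   VecSet(globVec, 0.);
>>>>>>>   VecScatterBegin(ctx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>>>>>>>   VecScatterEnd  (ctx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>>>>>>>   VecView(globVec, PETSC_VIEWER_STDOUT_WORLD);
>>>>>>>   VecScatterDestroy(&ctx); VecDestroy(&locVec); VecDestroy(&globVec);
>>>>>>>   PetscFinalize();
>>>>>>> }
>>>>>>>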
>>>>>> 
>>>>>> That's what I tried, but got an error. Did I miss something?
>>>>>> 
>>>>>>>> tail vecScatterGatherRoot.cpp
>>>>>> 
>>>>>> // How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
>>>>>> 
>>>>>> VecSet(globVec, 0.);
>>>>>> VecScatterBegin(scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>>>>>> VecScatterEnd  (scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>>>>>> VecView(globVec, PETSC_VIEWER_STDOUT_WORLD);
>>>>>> PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
>>>>>> 
>>>>>> PetscFinalize();
>>>>>> }
>>>>>> 
>>>>>>>> mpirun -n 2 ./vecScatterGatherRoot.exe
>>>>>> Vec Object: 2 MPI processes
>>>>>> type: mpi
>>>>>> Process [0]
>>>>>> 1.
>>>>>> Process [1]
>>>>>> 2.
>>>>>> Vec Object: 1 MPI processes
>>>>>> type: seq
>>>>>> 1.
>>>>>> 2.
>>>>>> Vec Object: 1 MPI processes
>>>>>> type: seq
>>>>>> -1.
>>>>>> -2.
>>>>>> ...
>>>>>> [1]PETSC ERROR: [1] VecScatterBegin_MPI_ToOne line 161 /home/fghoussen/Documents/INRIA/petsc/src/vec/vec/utils/vscat.c
>>>>>> [1]PETSC ERROR: [1] VecScatterBegin line 1698 /home/fghoussen/Documents/INRIA/petsc/src/vec/vec/utils/vscat.c
>>>>>> [1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
>>>>>> [1]PETSC ERROR: Signal received
>>>>>> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
>>>>>> 
>>>>>> 
>>>>>>>> 
>>>>>>>> Shall I use VecScatterCreateToZero "backward", or use VecScatterCreate instead? If yes, how? (My understanding is that VecScatterCreate can take only parallel vectors as input.)
>>>>>>> 
>>>>>>> This understanding is completely incorrect.
>>>>>>> 
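>>>>>>> VecScatterCreate() accepts a sequential vector on either side. Something like this (untested sketch, assuming globVec and rank as in your code; indices into the sequential vector are local, indices into the parallel vector are global):
>>>>>>>
>>>>>>>   PetscInt N; VecGetSize(globVec, &N);
>>>>>>>   PetscInt nloc = (rank == 0) ? N : 0;
>>>>>>>   // Sequential source: full-length on rank 0, empty elsewhere.
>>>>>>>   Vec locVec; VecCreateSeq(PETSC_COMM_SELF, nloc, &locVec);
>>>>>>>   IS is; ISCreateStride(PETSC_COMM_SELF, nloc, 0, 1, &is);
>>>>>>>   // A sequential-to-parallel scatter: a seq input vector is fine.
>>>>>>>   VecScatter ctx; VecScatterCreate(locVec, is, globVec, is, &ctx);
>>>>>>>   VecScatterBegin(ctx, locVec, globVec, INSERT_VALUES, SCATTER_FORWARD);
>>>>>>>   VecScatterEnd  (ctx, locVec, globVec, INSERT_VALUES, SCATTER_FORWARD);
>>>>>>>   ISDestroy(&is); VecScatterDestroy(&ctx);
>>>>>>>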
>>>>>>> Barry
>>>>>>> 
>>>>>>> 
>>>>>>>> Not sure how to do this.
>>>>>>>> 
>>>>>>>> It's not clear what you want. Do you want a seq vector duplicated on all procs?
>>>>>>>> 
>>>>>>>> No. The seq vec should "live" only on the master proc. This seq master vec should be "converted" into a parallel vector. Not sure how to do that.
>>>>>>>> 
>>>>>>>> Do you want it split up? The short answer is: use the appropriate scatter.
>>>>>>>> 
>>>>>>>> Matt
>>>>>>>> 
>>>>>>>> Franck
>>>>>>>> 
>>>>>>>> In this example, the final VecView of the parallel globVec vector should be [-1., -2.]
>>>>>>>> 
>>>>>>>>>> mpirun -n 2 ./vecScatterGatherRoot.exe
>>>>>>>> Vec Object: 2 MPI processes
>>>>>>>> type: mpi
>>>>>>>> Process [0]
>>>>>>>> 1.
>>>>>>>> Process [1]
>>>>>>>> 2.
>>>>>>>> Vec Object: 1 MPI processes
>>>>>>>> type: seq
>>>>>>>> 1.
>>>>>>>> 2.
>>>>>>>> Vec Object: 1 MPI processes
>>>>>>>> type: seq
>>>>>>>> -1.
>>>>>>>> -2.
>>>>>>>> [1]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
>>>>>>>> 
>>>>>>>>>> more vecScatterGatherRoot.cpp
>>>>>>>> // How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
>>>>>>>> //
>>>>>>>> // ~> g++ -o vecScatterGatherRoot.exe vecScatterGatherRoot.cpp -lpetsc -lm; mpirun -n X vecScatterGatherRoot.exe
>>>>>>>> 
>>>>>>>> #include "petsc.h"
>>>>>>>> 
>>>>>>>> int main(int argc,char **argv) {
>>>>>>>> PetscInitialize(&argc, &argv, NULL, NULL);
>>>>>>>> int size = 0; MPI_Comm_size(MPI_COMM_WORLD, &size);
>>>>>>>> int rank = 0; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>>>>>>> 
>>>>>>>> PetscInt globSize = size;
>>>>>>>> Vec globVec; VecCreateMPI(PETSC_COMM_WORLD, 1, globSize, &globVec);
>>>>>>>> VecSetValue(globVec, rank, (PetscScalar) (1.+rank), INSERT_VALUES);
>>>>>>>> VecAssemblyBegin(globVec); VecAssemblyEnd(globVec);
>>>>>>>> VecView(globVec, PETSC_VIEWER_STDOUT_WORLD);
>>>>>>>> PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
>>>>>>>> 
>>>>>>>> // Collect all the values from a parallel PETSc vector into a vector on the zeroth processor.
>>>>>>>> 
>>>>>>>> Vec locVec = NULL;
>>>>>>>> if (rank == 0) {
>>>>>>>>  PetscInt locSize = globSize;
>>>>>>>>  VecCreateSeq(PETSC_COMM_SELF, locSize, &locVec); VecSet(locVec, -1.);
>>>>>>>> }
>>>>>>>> VecScatter scatCtx; VecScatterCreateToZero(globVec, &scatCtx, &locVec);
>>>>>>>> VecScatterBegin(scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
>>>>>>>> VecScatterEnd  (scatCtx, globVec, locVec, INSERT_VALUES, SCATTER_FORWARD);
>>>>>>>> 
>>>>>>>> // Modify sequential vector on the zeroth processor.
>>>>>>>> 
>>>>>>>> if (rank == 0) {
>>>>>>>>  VecView(locVec, PETSC_VIEWER_STDOUT_SELF);
>>>>>>>>  PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
>>>>>>>>  VecScale(locVec, -1.);
>>>>>>>>  VecView(locVec, PETSC_VIEWER_STDOUT_SELF);
>>>>>>>>  PetscViewerFlush(PETSC_VIEWER_STDOUT_SELF);
>>>>>>>> }
>>>>>>>> MPI_Barrier(MPI_COMM_WORLD);
>>>>>>>> 
>>>>>>>> // How do I collect all the values from a sequential vector on the zeroth processor into a parallel PETSc vector ?
>>>>>>>> 
>>>>>>>> VecSet(globVec, 0.);
>>>>>>>> VecScatterBegin(scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>>>>>>>> VecScatterEnd  (scatCtx, locVec, globVec, ADD_VALUES, SCATTER_REVERSE);
>>>>>>>> VecView(globVec, PETSC_VIEWER_STDOUT_WORLD);
>>>>>>>> PetscViewerFlush(PETSC_VIEWER_STDOUT_WORLD);
>>>>>>>> 
>>>>>>>> PetscFinalize();
>>>>>>>> }
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> --
>>>>>>>> What most experimenters take for granted before they begin their
>>>>>>>> experiments is infinitely more interesting than any results to which
>>>>>>>> their
>>>>>>>> experiments lead.
>>>>>>>> -- Norbert Wiener
>>>>>>>> 
>>>>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>> 
>>>>> 
>>>> 
>> 
>> 


