[petsc-users] Best way to scatter a Seq vector ?

Barry Smith bsmith at mcs.anl.gov
Thu Jan 5 18:39:12 CST 2017


> On Jan 5, 2017, at 6:21 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> 
> Hello Devs is me again,
> 
> I'm trying to distribute a vector to all processes; the vector would originally live on root as a sequential vector, and I would like to scatter it. What would be the best call to do this?
> 
> I already know how to gather a distributed vector to root with VecScatterCreateToZero; this would be the inverse operation,

   Use the same VecScatter object but with SCATTER_REVERSE; note that you need to reverse the two vector arguments as well.
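
   A minimal sketch of that reuse (using bp0, bp2, and ctr from the code
below; the exact setup is an assumption, and all ranks must make the
scatter calls):

      ! ctr was built by VecScatterCreateToZero(bp2,ctr,bp0,ierr), so the
      ! forward direction is bp2 -> bp0; reversed, bp0 is the source and
      ! bp2 the destination, with the vector arguments swapped
      call VecScatterBegin(ctr,bp0,bp2,INSERT_VALUES,SCATTER_REVERSE,ierr)
      call VecScatterEnd(ctr,bp0,bp2,INSERT_VALUES,SCATTER_REVERSE,ierr)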


> I'm currently trying with VecScatterCreate(), and as of now I'm doing the following:
> 
> 
> if(rank==0)then
> 
> 
>      call VecCreate(PETSC_COMM_SELF,bp0,ierr); CHKERRQ(ierr) !if i use WORLD 
>                                                              !freezes in SetSizes
>      call VecSetSizes(bp0,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
>      call VecSetType(bp0,VECSEQ,ierr)
>      call VecSetFromOptions(bp0,ierr); CHKERRQ(ierr)
> 
> 
>      call VecSetValues(bp0,nbdp,ind,Rhs,INSERT_VALUES,ierr)
> 
>      !call VecSet(bp0,5.0D0,ierr); CHKERRQ(ierr)
> 
> 
>      call VecView(bp0,PETSC_VIEWER_STDOUT_WORLD,ierr)
> 
>      call VecAssemblyBegin(bp0,ierr) ; call VecAssemblyEnd(bp0,ierr) !rhs
> 
>      do i=0,nbdp-1,1
>         ind(i+1) = i
>      enddo
> 
>      call ISCreateGeneral(PETSC_COMM_SELF,nbdp,ind,PETSC_COPY_VALUES,locis,ierr)
> 
>     !call VecScatterCreate(bp0,PETSC_NULL_OBJECT,bp2,is,ctr,ierr) !if i use SELF 
>                                                                   !freezes here.
> 
>      call VecScatterCreate(bp0,locis,bp2,PETSC_NULL_OBJECT,ctr,ierr)
> 
> endif
> 
> bp2 being the receiving MPI vector to scatter into
> 
> But it freezes in VecScatterCreate when trying to use more than one processor. What would be a better approach?
> 
> 
> Thanks once again,
> 
> Manuel
> 
> 
> On Wed, Jan 4, 2017 at 3:30 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> Thanks, I had no idea how to debug and read those logs; that solved this issue at least (I was sending a message from root to everyone else, but trying to receive it on everyone else including root).
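> 
> A hypothetical sketch of the corrected pattern (buf, n, and nprocs are
> illustrative names; the point is that only the non-root ranks post the
> receive):
> 
>      integer :: status(MPI_STATUS_SIZE)
>      if (rank == 0) then
>         ! root sends to everyone else...
>         do i=1,nprocs-1
>            call MPI_Send(buf,n,MPI_DOUBLE_PRECISION,i,0,PETSC_COMM_WORLD,ierr)
>         enddo
>      else
>         ! ...and only the other ranks receive
>         call MPI_Recv(buf,n,MPI_DOUBLE_PRECISION,0,0,PETSC_COMM_WORLD,status,ierr)
>      endif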
> 
> Until next time, many thanks,
> 
> Manuel
> 
> On Wed, Jan 4, 2017 at 3:23 PM, Matthew Knepley <knepley at gmail.com> wrote:
> On Wed, Jan 4, 2017 at 5:21 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> I did a PetscBarrier just before calling the vector creation routine, and I'm pretty sure I'm calling it from every processor; the code looks like this:
> 
> From the gdb trace.
> 
>   Proc 0: Is in some MPI routine you call yourself, line 113
> 
>   Proc 1: Is in VecCreate(), line 130
> 
> You need to fix your communication code.
> 
>    Matt
>  
> call PetscBarrier(PETSC_NULL_OBJECT,ierr)
> 
> print*,'entering POInit from',rank
> !call exit()
> 
> call PetscObjsInit()
> 
> 
> And output gives:
> 
>  entering POInit from           0
>  entering POInit from           1
>  entering POInit from           2
>  entering POInit from           3
> 
> 
> Still hangs in the same way,
> 
> Thanks,
> 
> Manuel
> 
>  
> 
> On Wed, Jan 4, 2017 at 2:55 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> Thanks for the answers!
> 
> Here's the screenshot of what I got from bt in gdb (great hint on how to debug in PETSc, I didn't know that).
> 
> I don't really know what to look at here, 
> 
> Thanks,
> 
> Manuel
> 
> On Wed, Jan 4, 2017 at 2:39 PM, Dave May <dave.mayhem23 at gmail.com> wrote:
> Are you certain ALL ranks in PETSC_COMM_WORLD call these function(s)? These functions cannot be inside if statements like
> if (rank == 0){
>   VecCreateMPI(...)
> }
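> 
> A sketch of the collective pattern instead (illustrative names; every
> rank in PETSC_COMM_WORLD makes the same call, outside any rank test):
> 
>   ! correct: all ranks participate in creating the parallel vector
>   call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,nbdp,xp,ierr)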
> 
> 
> On Wed, 4 Jan 2017 at 23:34, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> Thanks Dave for the quick answer, appreciate it,
> 
> I just tried that and it didn't make a difference. Any other suggestions?
> 
> Thanks,
> Manuel
> 
> On Wed, Jan 4, 2017 at 2:29 PM, Dave May <dave.mayhem23 at gmail.com> wrote:
> You need to swap the order of your function calls.
> Call VecSetSizes() before VecSetType()
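> 
> A minimal ordering sketch (xp and nbdp as in your code):
> 
>      call VecCreate(PETSC_COMM_WORLD,xp,ierr); CHKERRQ(ierr)
>      call VecSetSizes(xp,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
>      call VecSetType(xp,VECMPI,ierr); CHKERRQ(ierr)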
> 
> Thanks,
>   Dave
> 
> 
> On Wed, 4 Jan 2017 at 23:21, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> Hello all, happy new year,
> 
> I'm working on parallelizing my code. It worked and produced some results when I simply ran it on more than one processor, but it created artifacts, because I didn't want one image of the whole program on each processor and they conflicted with each other.
> 
> Since the pressure solver is the main part I need in parallel, I'm choosing MPI to run everything on the root processor until it's time to solve for pressure. At that point I'm trying to create a distributed vector using either
> 
>      call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,nbdp,xp,ierr)
> or
>      call VecCreate(PETSC_COMM_WORLD,xp,ierr); CHKERRQ(ierr)
>      call VecSetType(xp,VECMPI,ierr)  
>      call VecSetSizes(xp,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
> 
> 
> In both cases the program hangs at this point, something that never happened with the naive approach I described before. I've made sure the global size, nbdp, is the same on every processor. What can be wrong?
> 
> Thanks for your kind help,
> 
> Manuel.
> 
> -- 
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
> 
> 


