[petsc-users] A way to distribute 3D arrays.

Barry Smith bsmith at mcs.anl.gov
Wed Feb 15 21:31:50 CST 2017


  You can do this in the style of 

 VecLoad_Binary_DA()

First you take your sequential vector and make it parallel in the "natural" ordering for 3D arrays:

  DMDACreateNaturalVector(da,&natural);
  VecScatterCreateToZero(natural,&scatter,&veczero);  /* veczero is of full size on process 0 and has zero entries on all other processes*/
  /* fill up veczero */
  VecScatterBegin(scatter,veczero,natural,INSERT_VALUES,SCATTER_REVERSE);
  VecScatterEnd(scatter,veczero,natural,INSERT_VALUES,SCATTER_REVERSE);
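
A minimal sketch of the elided "fill up veczero" step, assuming the DMDA has global extents mx, my, mz and that data is the rank-0 3D array (both names are hypothetical): only process 0 owns entries of veczero, and the natural ordering runs with the x index fastest, then y, then z.

  PetscMPIInt rank;
  PetscInt    i,j,k;
  PetscScalar *a;

  ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
  if (!rank) {
    ierr = VecGetArray(veczero,&a);CHKERRQ(ierr);
    for (k=0; k<mz; k++)
      for (j=0; j<my; j++)
        for (i=0; i<mx; i++)
          a[i + mx*(j + my*k)] = data[k][j][i];   /* natural ordering: x fastest, then y, then z */
    ierr = VecRestoreArray(veczero,&a);CHKERRQ(ierr);
  }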

 and then move it into the PETSc DMDA parallel ordering vector with 

  ierr = DMCreateGlobalVector(da,&xin);CHKERRQ(ierr);
  ierr = DMDANaturalToGlobalBegin(da,natural,INSERT_VALUES,xin);CHKERRQ(ierr);
  ierr = DMDANaturalToGlobalEnd(da,natural,INSERT_VALUES,xin);CHKERRQ(ierr);
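
Once the natural-to-global transfer is done, xin holds the data in the DMDA's parallel layout and the intermediate objects can be freed; a sketch of the cleanup, using the names above:

  ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
  ierr = VecDestroy(&veczero);CHKERRQ(ierr);
  ierr = VecDestroy(&natural);CHKERRQ(ierr);

Each process can then access its local piece of xin with DMDAVecGetArray(), indexing with the global (i,j,k) indices of its portion of the grid.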


> On Feb 15, 2017, at 7:16 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> 
> Hello,
> 
> My question this time is just whether there is a way to distribute a 3D array that is located on rank zero over the processors, if possible using the DMDAs; I'm trying not to do a lot of initialization I/O in parallel.
> 
> Thanks for your time,
> 
> Manuel


