[petsc-users] A way to distribute 3D arrays.
Manuel Valera
mvalera at mail.sdsu.edu
Sat Feb 18 14:44:13 CST 2017
Thanks guys, that helped a lot!

I think I got it now. I'm copying the code I created below in case you want to
suggest something, or maybe use it as an example...

I have a bonus question: next I'm going to operate on several arrays at the
same time. I created them using their slightly different layouts, one DA each,
and a peer told me that different DAs are not guaranteed to share the same
corners. Is this correct? If so, is there a way to enforce that? Thanks.
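My current guess at how to enforce it, in case it is on the right track, is to
ask the first DA for its ownership ranges and reuse them when creating the
others. A rough, untested sketch (nx2, ny2, nz2, dof2 and the boundary/stencil
choices are placeholders, and reusing the ranges directly only works if the
global sizes match; otherwise they would have to be adjusted so they sum to the
new sizes):

! Untested sketch: create a second DA (da2) that reuses the decomposition
! of an existing one (da), so both are guaranteed to share corners.
PetscInt :: m,n,p
PetscInt :: nx2=100,ny2=100,nz2=50,dof2=1   ! placeholder sizes/dof of the 2nd array
PetscInt,allocatable :: lx(:),ly(:),lz(:)
PetscErrorCode :: ierr
DM :: da2
! Process grid (m x n x p) of the existing DA:
call DMDAGetInfo(da,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, &
                 PETSC_NULL_INTEGER,m,n,p,PETSC_NULL_INTEGER,                 &
                 PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,    &
                 PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,ierr)
allocate(lx(m),ly(n),lz(p))
! Number of grid points owned along each direction by each process slab:
call DMDAGetOwnershipRanges(da,lx,ly,lz,ierr)
! Hand the same ranges to the new DA so the corners line up:
call DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,         &
                  DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,nx2,ny2,nz2,             &
                  m,n,p,dof2,1,lx,ly,lz,da2,ierr)

And here is the routine I mentioned: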
SUBROUTINE DAs(da,veczero,localv,array)
! use PetscObjects, only :: ierr
! Umbrella program to update and communicate the arrays in a
! distributed fashion using the DMDA objects from PETSc.
! Manuel Valera 1/20/17
! Arguments:
! da = DMDA array (3D) already created and setup
! veczero = sequential vector holding the full array on rank 0 (empty on the
!           other ranks), as produced by VecScatterCreateToZero
! globalv = global vector in the PETSc (DMDA) ordering, created internally
! localv = local chunk each processor works in.
! array = the array to be petscified. ONLY 3D ARRAYS for now.
! arrayp = the petsc version of the array, may be not needed.
! cta = the scatter context to translate globalv to localv in DA indices
Vec,intent(inout) :: veczero,localv
Vec :: natural,globalv
PetscScalar,dimension(:,:,:) :: array
PetscScalar,pointer,dimension(:,:,:) :: arrayp
DM,intent(inout) :: da
VecScatter :: cta
PetscInt ::xind,yind,zind,xwidth,ywidth,zwidth
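! Note: ierr, rank and sizex (the communicator size) are not declared here;
! they are assumed to come from the host module (see the commented-out use
! statement above).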
!Debug:
!print*,"INFO BEFORE SCATTER:"
call DAinf(da,xind,yind,zind,xwidth,ywidth,zwidth)
print*, SIZE(array,1),'x',SIZE(array,2),'x',SIZE(array,3)
!
call DMCreateLocalVector(da,localv,ierr)
!The following if-block is an attempt to scatter the arrays as vectors
!loaded from the master node.
if (sizex /= 1) then
!PETSC Devs suggested:
!Buffer 3d array natural has diff ordering than DMDAs:
call DMDACreateNaturalVector(da,natural,ierr)
call DMDAVecGetArrayF90(da,natural,arrayp,ierr)
!**fill up veczero***:
!TODO
if(rank==0)then
arrayp = array(xind+1:xwidth,yind+1:ywidth,zind+1:zwidth)
endif
call DMDAVecRestoreArrayF90(da,natural,arrayp,ierr)
!call DMRestoreNaturalVector(da,natural,ierr) !???? not needed?
!Distributing array:
call VecScatterCreateToZero(natural,cta,veczero,ierr)
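!veczero has the full array length on rank 0 and zero entries on the other
!ranks; it is what should hold the rank-0 array before the reverse scatter
!below pushes it into natural.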
call VecScatterBegin(cta,veczero,natural,INSERT_VALUES,SCATTER_REVERSE,ierr)
call VecScatterEnd(cta,veczero,natural,INSERT_VALUES,SCATTER_REVERSE,ierr)
call DMCreateGlobalVector(da,globalv,ierr)
call DMDANaturalToGlobalBegin(da,natural,INSERT_VALUES,globalv,ierr)
call DMDANaturalToGlobalEnd(da,natural,INSERT_VALUES,globalv,ierr)
call DMGlobalToLocalBegin(da,globalv,INSERT_VALUES,localv,ierr)
call DMGlobalToLocalEnd(da,globalv,INSERT_VALUES,localv,ierr)
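!localv now holds this rank's sub-block of the DMDA global vector,
!including the ghost/halo points taken from the neighboring ranks.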
else
print*, "This should be called in serial only."
call DMDAVecGetArrayF90(da,localv,arrayp,ierr)
arrayp = array
call DMDAVecRestoreArrayF90(da,localv,arrayp,ierr)
!call DMRestoreLocalVector(da,localv,ierr) !removed: localv comes from
!DMCreateLocalVector (not DMGetLocalVector) and the caller still needs it
endif
if (sizex /= 1) then !these objects exist only in the parallel branch
   call VecDestroy(globalv,ierr)
   call VecScatterDestroy(cta,ierr)
   call VecDestroy(natural,ierr)
endif
END SUBROUTINE DAs
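In case it helps as an example of how the routine is driven, this is roughly
what the calling side looks like; the grid sizes, the reader routine and the
variable names below are placeholders for my actual setup, and the usual PETSc
Fortran includes and PetscInitialize are assumed to have been done already:

PetscInt,parameter :: nx=64, ny=64, nz=32      ! placeholder global sizes
DM                 :: da3d
Vec                :: veczero, tempLocal
PetscScalar,allocatable :: temp0(:,:,:)
PetscMPIInt        :: rank
PetscErrorCode     :: ierr

call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
call DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,        &
                  DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,nx,ny,nz,               &
                  PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1,                &
                  PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,  &
                  da3d,ierr)
! (newer PETSc versions also need a call DMSetUp(da3d,ierr) here)

allocate(temp0(nx,ny,nz))
if (rank == 0) call read_initial_field(temp0)  ! hypothetical rank-0 reader
call DAs(da3d,veczero,tempLocal,temp0)         ! the routine above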
On Wed, Feb 15, 2017 at 7:31 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> You can do this in the style of
>
> VecLoad_Binary_DA()
>
> first you take your sequential vector and make it parallel in the
> "natural" ordering for 3d arrays
>
> DMDACreateNaturalVector(da,&natural);
> VecScatterCreateToZero(natural,&scatter,&veczero); /* veczero is of full size on process 0 and has zero entries on all other processes */
> /* fill up veczero */
> VecScatterBegin(scatter,veczero,natural,INSERT_VALUES,SCATTER_REVERSE);
> VecScatterEnd(scatter,veczero,natural,INSERT_VALUES,SCATTER_REVERSE);
>
> and then move it into the PETSc DMDA parallel ordering vector with
>
> ierr = DMCreateGlobalVector(da,&xin);CHKERRQ(ierr);
> ierr = DMDANaturalToGlobalBegin(da,natural,INSERT_VALUES,xin);CHKERRQ(ierr);
> ierr = DMDANaturalToGlobalEnd(da,natural,INSERT_VALUES,xin);CHKERRQ(ierr);
>
>
> > On Feb 15, 2017, at 7:16 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> >
> > Hello,
> >
> > My question this time is just whether there is a way to distribute a 3D
> > array that is located at rank zero over the processors, if possible using
> > the DMDAs; I'm trying not to do a lot of initialization I/O in parallel.
> >
> > Thanks for your time,
> >
> > Manuel
>
>