[petsc-users] A way to distribute 3D arrays.
Matthew Knepley
knepley at gmail.com
Sat Feb 18 15:03:04 CST 2017
On Sat, Feb 18, 2017 at 2:54 PM, Manuel Valera <mvalera at mail.sdsu.edu>
wrote:
> It's very easy, Matthew: I'm porting a serial code into a parallel PETSc-based
> one and I want to do it in the most natural, straightforward way, and I'm also
> learning PETSc at the same time.
>
> The original code uses a staggered grid and every variable is stored in a
> separate array, so I created a DA for each of them, but they differ by only
> +/-1 to +/-3 points in each dimension.
>
> So I guess I should allocate all of them in the same DA and just ignore
> the extra unused points for some arrays, right?
>
I have seen many staggered grid implementations in PETSc, but the most
successful have been those that used a single DA for everything
and just allowed some points at the boundary to go unused. I would advocate
this, but before you do it I would make a detailed drawing of the
3 layers around a corner, showing all unknowns and marking them with their
indices. This is invaluable when writing the equations.
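
If it helps, here is a rough, untested sketch of what I mean by a single DA;
the grid sizes nx, ny, nz and the number of fields are placeholders for your setup:

  DM       da;
  PetscInt nfields = 4;   /* e.g. u, v, w, p on the staggered grid */
  ierr = DMDACreate3d(PETSC_COMM_WORLD,
                      DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                      DMDA_STENCIL_BOX,
                      nx+1,ny+1,nz+1,                   /* size of the largest grid */
                      PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,
                      nfields,1,                        /* dof per point, stencil width */
                      NULL,NULL,NULL,&da);CHKERRQ(ierr);
  /* Fields that live on faces or edges simply never touch the extra boundary
     points; the equations you write decide which entries are actually used. */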
Thanks,
Matt
> Regards,
>
> Manuel
>
> On Sat, Feb 18, 2017 at 12:49 PM, Matthew Knepley <knepley at gmail.com>
> wrote:
>
>> On Sat, Feb 18, 2017 at 2:44 PM, Manuel Valera <mvalera at mail.sdsu.edu>
>> wrote:
>>
>>> Thanks guys, that helped a lot!
>>>
>>> I think I've got it now; I'm copying the code I created in case you want to
>>> suggest something or maybe use it as an example...
>>>
>>> I have a bonus question: next I'm going to operate on several arrays at the
>>> same time. I created them using their slightly different layouts, one DA
>>> each, but a peer told me different DAs are not guaranteed to share the same
>>> corners. Is this correct? If so, is there a way to enforce that? Thanks
>>>
>>
>> I do not understand. Do these DAs have the same size, meaning the same number
>> of vertices in X, Y, and Z? If so, why are you using separate arrays? You can
>> just use 'dof' to put all your data there, and strip out subvectors using the
>> VecStride functions, such as
>>
>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecStrideScatter.html
>>
>> If the DAs are different sizes, how would they ever have the same corners?
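>>
>> As a rough, untested sketch of that (field 0 and the vector names are just
>> placeholders; da is the single DMDA with dof fields), you can pull one field
>> out of the interleaved vector and push it back like this:
>>
>>   Vec      X, u;
>>   PetscInt nloc, bs;
>>   ierr = DMCreateGlobalVector(da,&X);CHKERRQ(ierr);
>>   ierr = VecGetLocalSize(X,&nloc);CHKERRQ(ierr);
>>   ierr = VecGetBlockSize(X,&bs);CHKERRQ(ierr);                /* bs == dof */
>>   ierr = VecCreateMPI(PETSC_COMM_WORLD,nloc/bs,PETSC_DETERMINE,&u);CHKERRQ(ierr);
>>   ierr = VecStrideGather(X,0,u,INSERT_VALUES);CHKERRQ(ierr);  /* field 0 -> u */
>>   /* ... operate on u ... */
>>   ierr = VecStrideScatter(u,0,X,INSERT_VALUES);CHKERRQ(ierr); /* u -> field 0 */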
>>
>> Thanks,
>>
>> Matt
>>
>>
>>> SUBROUTINE DAs(da,veczero,localv,array)
>>>
>>> ! use PetscObjects, only :: ierr
>>>
>>> ! Umbrella program to update and communicate the arrays in a
>>> ! distributed fashion using the DMDA objects from PETSc.
>>> ! Manuel Valera 1/20/17
>>>
>>> ! Arguments:
>>> ! da = DMDA array (3D) already created and setup
>>> ! veczero = full-size vector on process 0 (zero length on the others)
>>> ! globalv = global vector in the DMDA (PETSc) ordering
>>> ! localv = local chunk each processor works in.
>>> ! array = the array to be petscified. ONLY 3D ARRAYS as now.
>>> ! arrayp = the petsc version of the array, may be not needed.
>>> ! cta = the scatter context to translate globalv to localv in DA indices
>>>
>>> Vec,intent(inout) :: veczero,localv
>>> Vec :: natural,globalv
>>> PetscScalar,dimension(:,:,:) :: array
>>> PetscScalar,pointer,dimension(:,:,:) :: arrayp
>>> DM,intent(inout) :: da
>>> VecScatter :: cta
>>> PetscInt :: xind,yind,zind,xwidth,ywidth,zwidth
>>> PetscErrorCode :: ierr !if not already provided by a module
>>>
>>>
>>>
>>> !Debug:
>>> !print*,"INFO BEFORE SCATTER:"
>>> call DAinf(da,xind,yind,zind,xwidth,ywidth,zwidth)
>>> print*, SIZE(array,1),'x',SIZE(array,2),'x',SIZE(array,3)
>>> !
>>>
>>> call DMCreateLocalVector(da,localv,ierr)
>>>
>>> !The following if-block is an attempt to scatter the arrays as vectors
>>> !loaded from the master node.
>>>
>>> if (sizex /= 1) then
>>> !PETSc devs suggested:
>>> !The buffer 3d array 'natural' has a different ordering than the DMDA global vector:
>>> call DMDACreateNaturalVector(da,natural,ierr)
>>> call DMDAVecGetArrayF90(da,natural,arrayp,ierr)
>>> !**fill up veczero***:
>>> !TODO
>>> if(rank==0)then
>>> arrayp = array(xind+1:xwidth,yind+1:ywidth,zind+1:zwidth)
>>> endif
>>> call DMDAVecRestoreArrayF90(da,natural,arrayp,ierr)
>>> !call DMRestoreNaturalVector(da,natural,ierr) !???? not needed?
>>>
>>> !Distributing array:
>>> call VecScatterCreateToZero(natural,cta,veczero,ierr)
>>> call VecScatterBegin(cta,veczero,natural,INSERT_VALUES,SCATTER_REVERSE,ierr)
>>> call VecScatterEnd(cta,veczero,natural,INSERT_VALUES,SCATTER_REVERSE,ierr)
>>>
>>>
>>> call DMCreateGlobalVector(da,globalv,ierr)
>>> call DMDANaturalToGlobalBegin(da,natural,INSERT_VALUES,globalv,ierr)
>>> call DMDANaturalToGlobalEnd(da,natural,INSERT_VALUES,globalv,ierr)
>>>
>>> call DMGlobalToLocalBegin(da,globalv,INSERT_VALUES,localv,ierr)
>>> call DMGlobalToLocalEnd(da,globalv,INSERT_VALUES,localv,ierr)
>>>
>>> else
>>>
>>> print*, "This should be called in serial only."
>>>
>>> call DMDAVecGetArrayF90(da,localv,arrayp,ierr)
>>>
>>> arrayp = array
>>>
>>> call DMDAVecRestoreArrayF90(da,localv,arrayp,ierr)
>>> !localv was created with DMCreateLocalVector above and is returned to the
>>> !caller, so it is not restored (DMRestoreLocalVector pairs with DMGetLocalVector)
>>> endif
>>>
>>> if (sizex /= 1) then
>>> !These objects are only created in the parallel branch above:
>>> call VecDestroy(globalv,ierr)
>>> call VecScatterDestroy(cta,ierr)
>>> call VecDestroy(natural,ierr)
>>> endif
>>>
>>>
>>>
>>> END SUBROUTINE DAs
>>>
>>>
>>>
>>> On Wed, Feb 15, 2017 at 7:31 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>>
>>>>
>>>> You can do this in the style of
>>>>
>>>> VecLoad_Binary_DA()
>>>>
>>>> First you take your sequential vector and make it parallel in the
>>>> "natural" ordering for 3d arrays:
>>>>
>>>> DMDACreateNaturalVector(da,&natural);
>>>> VecScatterCreateToZero(natural,&scatter,&veczero); /* veczero is of
>>>> full size on process 0 and has zero entries on all other processes*/
>>>> /* fill up veczero */
>>>> VecScatterBegin(scatter,veczero,natural,INSERT_VALUES,SCATTER_REVERSE);
>>>> VecScatterEnd(scatter,veczero,natural,INSERT_VALUES,SCATTER_REVERSE);
>>>>
>>>> and then move it into the PETSc DMDA parallel ordering vector with
>>>>
>>>> ierr = DMCreateGlobalVector(da,&xin);CHKERRQ(ierr);
>>>> ierr = DMDANaturalToGlobalBegin(da,natural,INSERT_VALUES,xin);CHKERRQ(ierr);
>>>> ierr = DMDANaturalToGlobalEnd(da,natural,INSERT_VALUES,xin);CHKERRQ(ierr);
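>>>>
>>>> For the "fill up veczero" step, something like this should work (untested
>>>> sketch; a[k][j][i] stands for whatever serial array you have on process 0,
>>>> sized nz by ny by nx):
>>>>
>>>> PetscMPIInt rank;
>>>> PetscScalar *v;
>>>> ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
>>>> if (!rank) {
>>>>   PetscInt i,j,k,idx = 0;
>>>>   ierr = VecGetArray(veczero,&v);CHKERRQ(ierr);
>>>>   for (k=0; k<nz; k++)
>>>>     for (j=0; j<ny; j++)
>>>>       for (i=0; i<nx; i++) v[idx++] = a[k][j][i]; /* natural ordering: x fastest */
>>>>   ierr = VecRestoreArray(veczero,&v);CHKERRQ(ierr);
>>>> }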
>>>>
>>>>
>>>> > On Feb 15, 2017, at 7:16 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
>>>> >
>>>> > Hello,
>>>> >
>>>> > My question this time is just whether there is a way to distribute a 3D
>>>> > array that is located on rank zero over the processors, if possible using
>>>> > the DMDAs; I'm trying not to do a lot of initialization I/O in parallel.
>>>> >
>>>> > Thanks for your time,
>>>> >
>>>> > Manuel
>>>>
>>>>
>>>
>>
>>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener