<div dir="ltr">thanks guys that helped a lot! <div><br></div><div>I think i got it know, i copy the code i created in case you want to suggest something or maybe use it as example...</div><div><br></div><div>I have a bonus question: Im going to operate next in several arrays at the same time, i created them using their slightly different layouts, one DA each, a peer told me different DA are not guaranteed to share the same corners, is this correct? if so, is there a way to enforce that? Thanks</div><div><br></div><div><br></div><div><div>SUBROUTINE DAs(da,veczero,localv,array)</div><div><br></div><div> ! use PetscObjects, only :: ierr</div><div><br></div><div> ! Umbrella program to update and communicate the arrays in a</div><div> ! distributed fashion using the DMDA objects from PETSc.</div><div> ! Manuel Valera 1/20/17</div><div><br></div><div> ! Arguments:</div><div> ! da = DMDA array (3D) already created and setup </div><div> ! veczero = </div><div> ! globalv = </div><div> ! localv = local chunk each processor works in.</div><div> ! array = the array to be petscified. ONLY 3D ARRAYS as now. </div><div> ! arrayp = the petsc version of the array, may be not needed.</div><div> ! cta = the scatter context to translate globalv to localv in DA indices</div><div><br></div><div> Vec,intent(inout) :: veczero,localv</div><div> Vec :: natural,globalv</div><div> PetscScalar,dimension(:,:,:) :: array</div><div> PetscScalar,pointer,dimension(:,:,:) :: arrayp</div><div> DM,intent(inout) :: da</div><div> VecScatter :: cta</div><div> PetscInt ::xind,yind,zind,xwidth,ywidth,zwidth</div><div><br></div><div><br></div><div><br></div><div> !Debug:</div><div> !print*,"INFO BEFORE SCATTER:" </div><div> call DAinf(da,xind,yind,zind,xwidth,ywidth,zwidth)</div><div> print*, SIZE(array,1),'x',SIZE(array,2),'x',SIZE(array,3)</div><div> !</div><div><br></div><div> call DMCreateLocalVector(da,localv,ierr)</div><div><br></div><div> !The following if-block is an attempt to scatter the arrays as vectors</div><div> !loaded from master node.</div><div><br></div><div> if (sizex/= 1)then</div><div> !PETSC Devs suggested:</div><div> !Buffer 3d array natural has diff ordering than DMDAs:</div><div> call DMDACreateNaturalVector(da,natural,ierr)</div><div> call DMDAVecGetArrayF90(da,natural,arrayp,ierr)</div><div> !**fill up veczero***:</div><div> !TODO</div><div> if(rank==0)then</div><div> arrayp = array(xind+1:xwidth,yind+1:ywidth,zind+1:zwidth)</div></div><div><div> endif</div><div> call DMDAVecRestoreArrayF90(da,natural,arrayp,ierr)</div><div> !call DMRestoreNaturalVector(da,natural,ierr) !???? not needed?</div><div><br></div><div> !Distributing array:</div><div> call VecScatterCreateToZero(natural,cta,veczero,ierr)</div><div> call VecScatterBegin(cta,veczero,natural,INSERT_VALUES,SCATTER_REVERSE,ierr)</div><div> call VecScatterEnd(cta,veczero,natural,INSERT_VALUES,SCATTER_REVERSE,ierr)</div><div><br></div><div><br></div><div> call DMCreateGlobalVector(da,globalv,ierr)</div><div> call DMDANaturalToGlobalBegin(da,natural,INSERT_VALUES,globalv,ierr)</div><div> call DMDANaturalToGlobalEnd(da,natural,INSERT_VALUES,globalv,ierr)</div><div><br></div><div> call DMGlobalToLocalBegin(da,globalv,INSERT_VALUES,localv,ierr)</div><div> call DMGlobalToLocalEnd(da,globalv,INSERT_VALUES,localv,ierr)</div><div><br></div><div> else</div><div><br></div><div> print*, "This should be called in serial only."</div><div><br></div><div> call DMDAVecGetArrayF90(da,localv,arrayp,ierr)</div><div><br></div><div> arrayp = array</div><div><br></div><div> call DMDAVecRestoreArrayF90(da,localv,arrayp,ierr)</div><div> call DMRestoreLocalVector(da,localv,ierr)</div><div> endif</div><div><br></div><div> call VecDestroy(globalv,ierr)</div><div> call VecScatterDestroy(cta,ierr)</div><div> call VecDestroy(natural,ierr)</div><div><br></div><div><br></div><div><br></div><div> END SUBROUTINE DAs</div></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 15, 2017 at 7:31 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
You can do this in the style of<br>
<br>
VecLoad_Binary_DA()<br>
<br>
first you take your sequential vector and make it parallel in the "natural" ordering for 3d arrays<br>
<br>
DMDACreateNaturalVector(da,&<wbr>natural);<br>
VecScatterCreateToZero(<wbr>natural,&scatter,&veczero); /* veczero is of full size on process 0 and has zero entries on all other processes*/<br>
/* fill up veczero */<br>
VecScatterBegin(scatter,<wbr>veczero,natural,INSERT_VALUES,<wbr>SCATTER_REVERSE);<br>
VecScatterEnd(scatter,veczero,<wbr>natural,INSERT_VALUES,SCATTER_<wbr>REVERSE);<br>
<br>
and then move it into the PETSc DMDA parallel ordering vector with<br>
<br>
ierr = DMCreateGlobalVector(da,&xin);<wbr>CHKERRQ(ierr);<br>
ierr = DMDANaturalToGlobalBegin(da,<wbr>natural,INSERT_VALUES,xin);<wbr>CHKERRQ(ierr);<br>
ierr = DMDANaturalToGlobalEnd(da,<wbr>natural,INSERT_VALUES,xin);<wbr>CHKERRQ(ierr);<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
> On Feb 15, 2017, at 7:16 PM, Manuel Valera <<a href="mailto:mvalera@mail.sdsu.edu">mvalera@mail.sdsu.edu</a>> wrote:<br>
><br>
> Hello,<br>
><br>
> My question this time is just if there is a way to distribute a 3D array whos located at Zero rank over the processors, if possible using the DMDAs, i'm trying not to do a lot of initialization I/O in parallel.<br>
><br>
> Thanks for your time,<br>
><br>
> Manuel<br>
<br>
</div></div></blockquote></div><br></div>