[petsc-users] Multiple distributed arrays having equivalent partitioning but different stencil widths

Anton popov at uni-mainz.de
Mon Nov 28 17:36:42 CST 2016


Jason,

I guess you can let PETSc partition the first DMDA, then access the 
partitioning data using DMDAGetOwnershipRanges, and pass it to 
subsequent calls to DMDACreate3d via the lx, ly, lz parameters.
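
Something along these lines should work (an untested sketch; the global 
sizes, boundary types, stencil type, dof, and stencil widths below are 
placeholders you would replace with your own, and the DMSetUp calls are 
only needed with newer PETSc releases):

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da1, da2;
  PetscInt       m, n, p;
  const PetscInt *lx, *ly, *lz;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* Let PETSc choose the process grid and partitioning for the first
     DMDA (stencil width 1 here). */
  ierr = DMDACreate3d(PETSC_COMM_WORLD,
                      DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_BOX,
                      64, 64, 64,                       /* global sizes */
                      PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      1,                                /* dof */
                      1,                                /* stencil width */
                      NULL, NULL, NULL, &da1);CHKERRQ(ierr);
  ierr = DMSetUp(da1);CHKERRQ(ierr);

  /* Query the process grid (m,n,p) and the per-process ownership ranges. */
  ierr = DMDAGetInfo(da1, NULL, NULL, NULL, NULL, &m, &n, &p,
                     NULL, NULL, NULL, NULL, NULL, NULL);CHKERRQ(ierr);
  ierr = DMDAGetOwnershipRanges(da1, &lx, &ly, &lz);CHKERRQ(ierr);

  /* Create the second DMDA with a wider stencil (2 here) but exactly the
     same partitioning, by passing m,n,p and lx,ly,lz explicitly. */
  ierr = DMDACreate3d(PETSC_COMM_WORLD,
                      DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_BOX,
                      64, 64, 64,
                      m, n, p,
                      1,                                /* dof */
                      2,                                /* stencil width */
                      lx, ly, lz, &da2);CHKERRQ(ierr);
  ierr = DMSetUp(da2);CHKERRQ(ierr);

  ierr = DMDestroy(&da2);CHKERRQ(ierr);
  ierr = DMDestroy(&da1);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Note that the lx, ly, lz arrays returned by DMDAGetOwnershipRanges point 
into the first DMDA's internal data, so keep da1 alive (and do not free 
the arrays) until the second DMDACreate3d call has returned.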

Thanks,

Anton

On 11/29/16 12:30 AM, Jason Lefley wrote:
> I’m developing an application that uses PETSc’s DMDA functionality to perform distributed computation on a Cartesian grid. I encountered a situation where I need two distributed arrays with equivalent partitioning but each having a different stencil width.
>
> Right now I accomplish this through multiple calls to DMDACreate3d(); however, I found that for some combinations of global array dimensions and number of processes, the multiple calls to DMDACreate3d() result in partition sizes that differ by a single cell between the two distributed arrays. The application cannot run if this happens, because a particular process needs access to the same grid locations in both distributed arrays.
>
> Is it possible to use PETSc’s Cartesian grid partitioning functionality to perform a single partitioning of the domain and then set up the two distributed arrays using that partitioning in combination with different stencil widths? I looked at the source code that performs the partitioning but did not see an obvious way to use it in the capacity I describe. I think I could use lower-level calls in PETSc and some other partitioning algorithm if necessary, but I want to make sure that there is no way to do this using the built-in partitioning functionality.
>
> Thanks
