[petsc-users] Multiple distributed arrays having equivalent partitioning but different stencil widths

Jason Lefley jason.lefley at aclectic.com
Mon Nov 28 17:30:02 CST 2016


I’m developing an application that uses PETSc’s DMDA functionality to perform distributed computation on a Cartesian grid. I encountered a situation where I need two distributed arrays with equivalent partitioning, but each with a different stencil width.

Right now I accomplish this through multiple calls to DMDACreate3d(). However, I found that for some combinations of global array dimensions and process counts, the separate calls to DMDACreate3d() produce partitions whose local sizes differ by a single cell between the two distributed arrays. The application cannot run when this happens because each process needs access to the same grid locations in both distributed arrays.

Is it possible to use PETSc’s Cartesian grid partitioning functionality to perform a single partitioning of the domain, and then set up the two distributed arrays using that partitioning in combination with different stencil widths? I looked at the source code that performs the partitioning but did not see an obvious way to use it in this capacity. I could fall back to lower-level PETSc calls and another partitioning algorithm if necessary, but I want to make sure there is no way to do this with the built-in partitioning functionality first.
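One approach that may do what I want (a sketch of what I have been considering, not a tested solution): create the first DMDA normally, query its per-process ownership ranges with DMDAGetOwnershipRanges() and its process grid with DMDAGetInfo(), then pass those explicitly as the m/n/p and lx/ly/lz arguments when creating the second DMDA, so both share one partitioning while each keeps its own stencil width. The global sizes, dof, and stencil widths below are placeholders:

```c
#include <petscdmda.h>

/* Sketch: two DMDAs with identical partitioning but different stencil
 * widths. M, N, P, dof, and the stencil widths are placeholders. */
PetscErrorCode CreateMatchedDMDAs(MPI_Comm comm, DM *da1, DM *da2)
{
  PetscErrorCode  ierr;
  PetscInt        M = 64, N = 64, P = 64;   /* placeholder global sizes */
  PetscInt        m, n, p;
  const PetscInt *lx, *ly, *lz;

  PetscFunctionBeginUser;
  /* First DMDA: let PETSc choose the process grid and partitioning,
   * stencil width 1. */
  ierr = DMDACreate3d(comm, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DM_BOUNDARY_NONE, DMDA_STENCIL_STAR, M, N, P,
                      PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      1 /* dof */, 1 /* stencil width */,
                      NULL, NULL, NULL, da1);CHKERRQ(ierr);

  /* Recover the chosen process grid and per-process ownership ranges. */
  ierr = DMDAGetInfo(*da1, NULL, NULL, NULL, NULL, &m, &n, &p,
                     NULL, NULL, NULL, NULL, NULL, NULL);CHKERRQ(ierr);
  ierr = DMDAGetOwnershipRanges(*da1, &lx, &ly, &lz);CHKERRQ(ierr);

  /* Second DMDA: same process grid and ownership ranges, wider stencil. */
  ierr = DMDACreate3d(comm, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DM_BOUNDARY_NONE, DMDA_STENCIL_STAR, M, N, P,
                      m, n, p, 1 /* dof */, 2 /* stencil width */,
                      lx, ly, lz, da2);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```

If passing the first DMDA's ownership ranges into the second DMDACreate3d() call like this is the intended usage, that would answer my question; I would appreciate confirmation or a pointer to a better mechanism.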

Thanks
