[petsc-users] About DMDA (and extracting its ordering)
Thibaut Appel
t.appel17 at imperial.ac.uk
Thu Feb 21 10:09:44 CST 2019
Dear PETSc developers/users,
I’m solving linear PDEs on a regular grid with high-order finite
differences, assembling an MPIAIJ matrix to solve linear systems or
eigenvalue problems. I have been using a vertex-major, natural ordering for
the parallelism with PetscSplitOwnership (yielding rectangular slices of
the physical domain) and want to move to DMDA to get a more
square-ish domain decomposition and minimize communication between
processes.
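For context, here is a minimal sketch of the splitting I currently do
(error checking omitted; the grid sizes are placeholders and the nonzero
counts come from my own routines, not shown):

    /* 1D split of the global unknowns: each rank owns a contiguous
       block of rows in the natural (vertex-major) ordering */
    PetscInt Mx = 128, My = 128;     /* placeholder grid dimensions */
    PetscInt n_local  = PETSC_DECIDE;
    PetscInt N_global = Mx * My;     /* one unknown per vertex */
    PetscInt *d_nnz, *o_nnz;         /* per-row counts from my preallocation routines (not shown) */
    Mat      A;

    PetscSplitOwnership(PETSC_COMM_WORLD, &n_local, &N_global);

    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetType(A, MATMPIAIJ);
    MatSetSizes(A, n_local, n_local, N_global, N_global);
    MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);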
However, my application is memory-critical, and I have finely tuned
matrix preallocation routines for allocating memory “optimally”. It
seems the memory of a DMDA matrix is allocated according to the value of
the stencil width passed to DMDACreate, yet the manual says about these stencils:
“These DMDA stencils have nothing directly to do with any finite
difference stencils one might chose to use for a discretization”
Despite reading the manual pages, there must be something I do not
understand in the DM topology: what is that "stencil width" for, then? I
will not be using ghost values for my FD method, right?
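To make the question concrete, this is the kind of call sequence I am
referring to (the grid sizes, the dof and the stencil width s below are
placeholders):

    DM       da;
    Mat      A;
    PetscInt Mx = 128, My = 128;   /* placeholder grid dimensions */
    PetscInt s  = 4;               /* the "stencil width" whose role I do not understand */

    DMDACreate2d(PETSC_COMM_WORLD,
                 DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                 DMDA_STENCIL_STAR,            /* or DMDA_STENCIL_BOX */
                 Mx, My,
                 PETSC_DECIDE, PETSC_DECIDE,   /* let PETSc pick the process grid */
                 1,                            /* dof per vertex */
                 s,                            /* stencil width */
                 NULL, NULL, &da);
    DMSetFromOptions(da);
    DMSetUp(da);

    /* my understanding is that this matrix is preallocated according to s,
       independently of the FD stencil I actually use */
    DMCreateMatrix(da, &A);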
I was then wondering if I could just create an MPIAIJ matrix and, with a
PETSc routine, get the global indices of the domain owned by each process: in
other words, an equivalent of PetscSplitOwnership that gives me the DMDA
unknown ordering, so that I can feed and loop over it in my preallocation and
assembly routines.
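Something along the following lines is what I am hoping exists; this is
only my guess, reusing da and Mx from the sketch above, with
DMDAGetCorners/DMDAGetAO as my best candidates for recovering the mapping
between the natural (i,j) indexing and the PETSc global ordering:

    PetscInt xs, ys, xm, ym, i, j;
    AO       ao;

    /* corner and extent of the grid patch owned by this process */
    DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL);
    /* application ordering: natural <-> PETSc (DMDA) global numbering */
    DMDAGetAO(da, &ao);

    for (j = ys; j < ys + ym; j++) {
      for (i = xs; i < xs + xm; i++) {
        PetscInt row = j * Mx + i;           /* natural, vertex-major index */
        AOApplicationToPetsc(ao, 1, &row);   /* global row in the DMDA ordering */
        /* ... use 'row' in my preallocation and assembly loops ... */
      }
    }

If something like this is the intended approach, that would be all I need
for my preallocation and assembly routines.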
Thanks very much,
Thibaut