[petsc-users] "Exact-ish" preallocation for (mostly) unstructured mesh with periodic boundary conditions?

Matthew Knepley knepley at gmail.com
Sun Mar 3 08:31:16 CST 2013


On Fri, Mar 1, 2013 at 12:50 PM, Ramsey, James J CIV (US)
<james.j.ramsey14.civ at mail.mil> wrote:

> I'm trying to get a better preallocation scheme. I noticed an earlier e-mail
> <https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2010-September/006886.html>
> mentioning that "for finite elements with a normal cell division or finite
> differences with a normal vertex division, you need only count the nonzeros
> for rows you own since the partitions overlap on the boundary."
> Unfortunately, this doesn't quite work for me. Here's the latest attempt at
> a routine, with some details skipped:
>

I understand the code you have below, but we have chosen to do it a
complementary way. The way I understand FEM preallocation, schematically you

  Loop over all global unknowns u_i (coefficient of phi_i)
    Loop over the global unknowns u_j (coefficient of phi_j) whose basis
    function phi_j has nonzero overlap with phi_i
      Increment dnnz or onnz, depending on whether column j lies in the
      local ownership range
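
Counts gathered this way end up in MatMPIAIJSetPreallocation(). A minimal
sketch (the wrapper name and the sizes passed to MatSetSizes() are
placeholders, not code from this thread):

#include <petscmat.h>
#include <vector>

// Minimal sketch: create an MPIAIJ matrix and preallocate it from the
// per-owned-row counts. The scalar d_nz/o_nz estimates (here 0) are
// ignored when the arrays are provided.
PetscErrorCode PreallocateAIJ(MPI_Comm comm, PetscInt nLocalRows, PetscInt nGlobalRows,
                              const std::vector<PetscInt> &d_nnz,
                              const std::vector<PetscInt> &o_nnz, Mat *A)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, nLocalRows, nLocalRows, nGlobalRows, nGlobalRows);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A, 0, &d_nnz[0], 0, &o_nnz[0]);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}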

We have code to do this in PETSc right now. You need to specify your
topology using DMPlex, and then just use DMCreateMatrix().
There is no getting around communication or ghosting when doing this
operation.
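
For reference, a rough sketch of that path, using current function names
(older releases spell DMSetLocalSection() and DMCreateMatrix() slightly
differently); the mesh source and the one-dof-per-vertex P1 layout are only
illustrative:

#include <petscdmplex.h>

// Rough sketch: build a DMPlex, describe the dof layout with a PetscSection,
// and let DMCreateMatrix() do the preallocation, including the communication
// for rows coupled across partition boundaries.
PetscErrorCode CreatePreallocatedMat(MPI_Comm comm, Mat *A)
{
  DM             dm;
  PetscSection   section;
  PetscInt       pStart, pEnd, vStart, vEnd, p;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMCreate(comm, &dm);CHKERRQ(ierr);
  ierr = DMSetType(dm, DMPLEX);CHKERRQ(ierr);
  ierr = DMSetFromOptions(dm);CHKERRQ(ierr);   // e.g. -dm_plex_filename <meshfile>

  ierr = DMPlexGetChart(dm, &pStart, &pEnd);CHKERRQ(ierr);
  ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);CHKERRQ(ierr); // vertices

  ierr = PetscSectionCreate(comm, &section);CHKERRQ(ierr);
  ierr = PetscSectionSetChart(section, pStart, pEnd);CHKERRQ(ierr);
  for (p = vStart; p < vEnd; ++p) {
    ierr = PetscSectionSetDof(section, p, 1);CHKERRQ(ierr);  // one dof per vertex (P1)
  }
  ierr = PetscSectionSetUp(section);CHKERRQ(ierr);
  ierr = DMSetLocalSection(dm, section);CHKERRQ(ierr);

  ierr = DMCreateMatrix(dm, A);CHKERRQ(ierr);  // preallocated AIJ matrix

  ierr = PetscSectionDestroy(&section);CHKERRQ(ierr);
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

DMCreateMatrix() works out the couplings from the mesh adjacency plus the
section, including contributions from ghosted cells, which is where the
communication mentioned above happens.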

   Matt


> std::vector<PetscInt> d_nnz(nLocalRows), o_nnz(nLocalRows);
> std::vector<std::set<PetscInt> > d_nzInds(nLocalRows), o_nzInds(nLocalRows);
>
> for ( ... loop over elements ...) {
>
>    wrapNodeIds(*mesh, nIdVec);  // Accounts for periodic boundary conditions
>    mesh->nodes2globalDOF(nIdVec, globalDOFsPerElem);
>
>    std::vector<PetscInt>::iterator gdBegin = globalDOFsPerElem.begin();
>    std::vector<PetscInt>::iterator gdEnd = globalDOFsPerElem.end();
>
>    for (std::vector<PetscInt>::iterator itrI = gdBegin; itrI != gdEnd; ++itrI) {
>
>      PetscInt I = *itrI;
>
>      // Only count rows owned by this process.
>      if ((I >= minDOFId) && (I <= maxDOFId)) {
>        PetscInt Ioffset = I - minDOFId;
>
>        for (std::vector<PetscInt>::iterator itrJ = gdBegin; itrJ != gdEnd; ++itrJ) {
>
>          PetscInt J = *itrJ;
>
>          if ((J >= minDOFId) && (J <= maxDOFId)) {
>            // J will not be inserted if it is already in d_nzInds[Ioffset].
>            // This avoids duplicates.
>            d_nzInds[Ioffset].insert(J);
>          }
>          else {
>            o_nzInds[Ioffset].insert(J);
>          }
>
>        }
>      }
>    }
> }
>
> // Collapse the per-row index sets into nonzero counts.
> for (PetscInt i = 0; i < nLocalRows; ++i) {
>    d_nnz[i] = d_nzInds[i].size();
>    o_nnz[i] = o_nzInds[i].size();
> }
>
> For nodes in the interior, this seems to work well, but at the boundaries
> of the domain, the number of non-zeros is underestimated. It seems like
> parallel communication is unavoidable here, but I'm not sure how to do it,
> and I'm unsure if I really want to mess with relatively low-level MPI
> routines to get what I want.
>
> Any ideas?




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

