[petsc-users] "Exact-ish" preallocation for (mostly) unstructured mesh with periodic boundary conditions?
Jed Brown
jedbrown at mcs.anl.gov
Fri Mar 1 14:43:42 CST 2013
On Fri, Mar 1, 2013 at 12:50 PM, Ramsey, James J CIV (US) <james.j.ramsey14.civ at mail.mil> wrote:
> I'm trying to get a better preallocation scheme. I noticed an earlier e-mail
> <https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2010-September/006886.html>
> mentioning that "for finite elements with a normal cell division or finite
> differences with a normal vertex division, you need only count the nonzeros
> for rows you own since the partitions overlap on the boundary."
> Unfortunately, this doesn't quite work for me.
I can't tell what your question is. It appears that you are not considering
elements in the overlap. If you don't have any overlap in your mesh, you
need to communicate.
http://lists.mcs.anl.gov/pipermail/petsc-users/2008-May/003020.html
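
One way to do that communication with plain MPI: while looping over the elements, each rank also records the (row, column) pairs for rows it touches but does not own, keyed by the owning rank, and ships them to the owners before the counts are taken. Below is a minimal sketch along those lines; rowStart (the first global row of every rank, with the total number of rows appended, e.g. gathered once with MPI_Allgather) and sendPairs (filled during the element loop) are assumptions, not names from your code.

// Minimal sketch: ship (row, col) pairs for non-owned rows to their owners
// and merge them into the local sets before computing d_nnz/o_nnz.
// rowStart and sendPairs are placeholders, not part of the code below.
#include <mpi.h>
#include <petscsys.h>   // PetscInt, MPIU_INT
#include <algorithm>
#include <map>
#include <set>
#include <vector>

void exchangeOffProcessEntries(MPI_Comm comm,
                               const std::vector<PetscInt> &rowStart,            // size nranks+1: first row of each rank, then global #rows
                               std::map<int, std::vector<PetscInt> > &sendPairs, // owner rank -> flat (row, col) pairs
                               std::vector<std::set<PetscInt> > &d_nzInds,
                               std::vector<std::set<PetscInt> > &o_nzInds)
{
  int rank, size;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);

  // How many PetscInts we send to each rank, and how many we receive.
  std::vector<int> sendCounts(size, 0), recvCounts(size, 0);
  for (std::map<int, std::vector<PetscInt> >::iterator it = sendPairs.begin();
       it != sendPairs.end(); ++it)
    sendCounts[it->first] = (int) it->second.size();
  MPI_Alltoall(&sendCounts[0], 1, MPI_INT, &recvCounts[0], 1, MPI_INT, comm);

  // Flatten the outgoing pairs and exchange them.
  std::vector<int> sdispls(size, 0), rdispls(size, 0);
  for (int p = 1; p < size; ++p) {
    sdispls[p] = sdispls[p-1] + sendCounts[p-1];
    rdispls[p] = rdispls[p-1] + recvCounts[p-1];
  }
  int totalSend = sdispls[size-1] + sendCounts[size-1];
  int totalRecv = rdispls[size-1] + recvCounts[size-1];
  std::vector<PetscInt> sendBuf(totalSend + 1), recvBuf(totalRecv + 1); // +1 keeps &buf[0] valid when empty
  for (int p = 0; p < size; ++p)
    if (sendCounts[p])
      std::copy(sendPairs[p].begin(), sendPairs[p].end(), sendBuf.begin() + sdispls[p]);
  MPI_Alltoallv(&sendBuf[0], &sendCounts[0], &sdispls[0], MPIU_INT,
                &recvBuf[0], &recvCounts[0], &rdispls[0], MPIU_INT, comm);

  // Merge the received pairs exactly like the locally generated ones.
  PetscInt firstRow = rowStart[rank], lastRowP1 = rowStart[rank+1];
  for (int k = 0; k + 1 < totalRecv; k += 2) {
    PetscInt I = recvBuf[k], J = recvBuf[k+1];
    PetscInt Ioffset = I - firstRow;
    if (J >= firstRow && J < lastRowP1) d_nzInds[Ioffset].insert(J);
    else                                o_nzInds[Ioffset].insert(J);
  }
}

MPI_Alltoallv keeps the sketch short; with many processes, a point-to-point exchange with just the few ranks that actually share boundary rows scales better.
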
> Here's the latest attempt at a routine, with some details skipped:
>
>   std::vector<PetscInt> d_nnz(nLocalRows), o_nnz(nLocalRows);
>   std::vector<std::set<PetscInt> > d_nzInds(nLocalRows), o_nzInds(nLocalRows);
>
>   for ( ... loop over elements ...) {
>
>     wrapNodeIds(*mesh, nIdVec); // Accounts for periodic boundary conditions
>     mesh->nodes2globalDOF(nIdVec, globalDOFsPerElem);
>
>     std::vector<PetscInt>::iterator gdBegin = globalDOFsPerElem.begin();
>     std::vector<PetscInt>::iterator gdEnd = globalDOFsPerElem.end();
>
>     for (std::vector<PetscInt>::iterator itrI = gdBegin; itrI != gdEnd; ++itrI) {
>
>       PetscInt I = *itrI;
>
>       if ((I >= minDOFId) && (I <= maxDOFId)) {
>         PetscInt Ioffset = I - minDOFId;
>
>         for (std::vector<PetscInt>::iterator itrJ = gdBegin; itrJ != gdEnd; ++itrJ) {
>
>           PetscInt J = *itrJ;
>
>           if ((J >= minDOFId) && (J <= maxDOFId)) {
>             // J will not be inserted if it is already in d_nzInds[Ioffset].
>             // This avoids duplicates.
>             d_nzInds[Ioffset].insert(J);
>           }
>           else {
>             o_nzInds[Ioffset].insert(J);
>           }
>         }
>       }
>     }
>   } // end loop over elements
>
>   for (PetscInt i = 0; i < nLocalRows; ++i) {
>     d_nnz[i] = d_nzInds[i].size();
>     o_nnz[i] = o_nzInds[i].size();
>   }
>
> For nodes in the interior, this seems to work well, but at the boundaries
> of the domain, the number of non-zeros is underestimated. It seems like
> parallel communication is unavoidable here, but I'm not sure how to do it,
> and I'm unsure if I really want to mess with relatively low-level MPI
> routines to get what I want.
>
> Any ideas?
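
Once the off-process contributions have been merged into d_nnz and o_nnz, the counts go into PETSc in the usual way. A minimal sketch, assuming nLocalRows > 0 and a square matrix; nGlobalRows is a placeholder for the global problem size, the other names are from your code:

#include <petscmat.h>
#include <vector>

// Create and preallocate an AIJ matrix from the per-row counts.
PetscErrorCode createPreallocatedAIJ(MPI_Comm comm, PetscInt nLocalRows, PetscInt nGlobalRows,
                                     const std::vector<PetscInt> &d_nnz,
                                     const std::vector<PetscInt> &o_nnz, Mat *A)
{
  PetscErrorCode ierr;
  ierr = MatCreate(comm, A); CHKERRQ(ierr);
  ierr = MatSetSizes(*A, nLocalRows, nLocalRows, nGlobalRows, nGlobalRows); CHKERRQ(ierr);
  ierr = MatSetType(*A, MATAIJ); CHKERRQ(ierr);
  // Whichever of these matches the actual matrix type takes effect; the other is a no-op.
  // The scalar d_nz/o_nz arguments (0 here) are ignored when the arrays are supplied.
  ierr = MatMPIAIJSetPreallocation(*A, 0, &d_nnz[0], 0, &o_nnz[0]); CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(*A, 0, &d_nnz[0]); CHKERRQ(ierr);
  return 0;
}

If the counts are right, the subsequent MatSetValues()/MatAssemblyBegin()/MatAssemblyEnd() assembly should not trigger any further mallocs.
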