[petsc-users] nnz's in finite element stiffness matrix
Matthew Knepley
knepley at gmail.com
Wed Sep 8 01:29:00 CDT 2010
On Wed, Sep 8, 2010 at 2:28 AM, Dave Makhija <makhijad at colorado.edu> wrote:
> I used to have a decent setup that worked as follows:
>
> 1. Build node->element table (For a given node, which elements contain
> this node. You may already have this) OR build a node connectivity
> table (For a given node, which nodes are connected).
> 2. Build element->node table (For a given element, which nodes are
> contained in this element. You probably already have this)
> 3. Loop over nodes and get the global DOFs contained in that node. For
> those DOF id rows, add the number of nodal DOFs for each unique node
> connected to the current node, using the node->element and
> element->node tables OR the node connectivity table.
> 4. Loop over elements and add the number of elemental DOFs to each
> nodal DOF global id row contained in this element, using the
> element->node table. Also add the number of elemental DOFs and the sum
> of the nodal DOFs to the elemental global DOF id rows.
> 5. Add contributions of multi-point constraints, i.e. Lagrange multiplier
> DOFs.
>
> Note that in parallel you may have to scatter counts to global DOF ids
> owned by other processes to get an accurate Onz. This setup as a whole
> can be pretty fast but can scale poorly if you don't have a good way
> of developing the node-element table or node connectivity since it
> requires some loops within loops.
>
> Another way is to use the PETSc preallocation macros such as
> MatPreallocateSet. You can essentially do a "dry run" of a Jacobian
> Matrix assembly into the preallocation macros. They can be tricky to
> use, so if you have problems you can simply look at the PETSc
> documentation for those macros and hand code them yourself. This
> strategy will overestimate the memory, but a matrix inversion will
> dwarf how much this will waste. I vaguely remember a thread in the
> PETSc archives asking how to free the unneeded memory if this is
> absolutely necessary, but I don't think anything really worked without a full
> matrix copy. If someone by chance knows how the Trilinos
> "OptimizeStorage" routine works for Epetra matrices they could
> potentially shed some light on how to do this - if it is even
> possible.
If you can assemble the Jacobian, you can count the nonzeros. The only
subtlety here is nonzeros coming from other processes. However, for finite
elements with a normal cell division or finite differences with a normal
vertex division, you need only count the nonzeros for rows you own since
the partitions overlap on the boundary. Distinguishing the diagonal from
off-diagonal block is easy since you know the rows you own. The whole
process takes a vanishingly small time compared to assembly since you are
not constructing the values.
Matt
> Dave Makhija
>
>
>
> On Tue, Sep 7, 2010 at 2:24 PM, stali <stali at purdue.edu> wrote:
> > Petsc-users
> >
> > How can I efficiently calculate the _exact_ number of non-zeros that
> would
> > be in the global sparse (stiffness) matrix given an unstructured mesh?
> >
> > Thanks
> >
>
--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener