zero out columns

Dave May dave.mayhem23 at gmail.com
Fri Jan 18 08:56:32 CST 2008


For some finite element codes I've written in the past I have wanted to
leave the constraints in the matrix. Since the global FE stiffness is
assembled from element (local) contributions, a masking matrix used to zero
out entries can also be applied at the element level. I would
assemble the full matrix, then apply the mask. I found this approach
efficient enough (provided you use MAT_FLUSH_ASSEMBLY when switching from
ADD_VALUES to INSERT_VALUES) because you are not generating any new nonzero
entries within the matrix sparsity pattern.
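
To make this concrete, here is a rough sketch of the idea (untested; the
connectivity array elnodes, the element matrices Ke and the element-level
constraint mask elmask are just placeholders for whatever your FE code
provides):

#include <petscmat.h>

/* Sketch only: assemble the full stiffness with ADD_VALUES, flush, then
   overwrite the constrained rows/columns of each element block with zeros
   using INSERT_VALUES.  elnodes, Ke and elmask are hypothetical inputs.  */
PetscErrorCode AssembleWithMask(Mat A,PetscInt nel,PetscInt nen,
                                PetscInt **elnodes,PetscScalar **Ke,
                                PetscBool **elmask)
{
  PetscErrorCode ierr;
  PetscScalar    *zeros;
  PetscInt       e,i;

  PetscFunctionBegin;
  /* 1) add every element contribution; this defines the sparsity pattern */
  for (e = 0; e < nel; e++) {
    ierr = MatSetValues(A,nen,elnodes[e],nen,elnodes[e],Ke[e],ADD_VALUES);CHKERRQ(ierr);
  }
  /* 2) flush so it is legal to switch from ADD_VALUES to INSERT_VALUES */
  ierr = MatAssemblyBegin(A,MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
  /* 3) apply the mask: insert zeros into the constrained rows/columns of
        each element block; those locations already exist in the pattern,
        so no new nonzeros are created                                    */
  ierr = PetscMalloc(nen*sizeof(PetscScalar),&zeros);CHKERRQ(ierr);
  ierr = PetscMemzero(zeros,nen*sizeof(PetscScalar));CHKERRQ(ierr);
  for (e = 0; e < nel; e++) {
    for (i = 0; i < nen; i++) {
      if (!elmask[e][i]) continue;  /* only touch constrained element dofs */
      ierr = MatSetValues(A,1,&elnodes[e][i],nen,elnodes[e],zeros,INSERT_VALUES);CHKERRQ(ierr);
      ierr = MatSetValues(A,nen,elnodes[e],1,&elnodes[e][i],zeros,INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = PetscFree(zeros);CHKERRQ(ierr);
  /* 4) final assembly */
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The zeros are inserted only at (row, column) locations the ADD_VALUES pass
has already touched, which is why the sparsity pattern stays the same.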

Cheers,
    Dave.

On Jan 18, 2008 9:32 PM, Toby D. Young <tyoung at ippt.gov.pl> wrote:

>
>
>
> Hello users,
>
> The procedure I wish to program with PETSc requires setting to zero the
> matrix entries in a given set of rows and columns. The operation is
> intended to run on large parallel sparse matrices.
>
> Using MatZeroRowsIS() I can easily zero out the rows I want, and now I am
> wondering how to zero out the columns. I imagine there are a few ways of
> doing this; I've come up with two:
>
>
> Method 1
> MatGetColumnIJ()     // get the column index structure of the matrix
> MatSetValues()       // set the wanted entries to zero
> MatRestoreColumnIJ() // restore the column index structure
>
> The PETSc manual warns of MatGetColumnIJ(): "Since PETSc matrices are
> usually stored in compressed row format, this routine will generally be
> slow."
>
> Method 2
> MatTranspose()       // Take the transpose of the matrix
> MatZeroRowsIS()      // Zero out the row which was the target column
>
> This requires creating a new matrix, i.e. the transpose, and then
> releasing the memory of the old matrix. The advantage is that it is
> relatively straightforward to code up.
>
> The rather obvious third way is to simply force the column elements to
> zero with MatSetValues(), which, I am guessing, is not likely to be
> efficient, so I am not really considering it.
>
> I am wondering if anyone has a clue as to which of the methods above is
> likely to be more efficient for large parallel sparse matrices. Perhaps
> someone may have a suggestion for an alternative approach.
>
> Thanks in advance.
>
> Best,
>        Toby
>
>
>
>
> --
>
> Toby D. Young - Adiunkt (Assistant Professor)
> Department of Computational Science
> Institute of Fundamental Technological Research
> Polish Academy of Sciences
> Room 206, Swietokrzyska 21
> 00-049 Warsaw, POLAND
>
>