SeqSBAIJ with cholesky

Hong Zhang hzhang at
Tue Oct 31 10:16:39 CST 2006

> I used MatCreateSeqSBAIJ to create the sbaij matrix. Does this mean that the
> entire matrix is stored under one processor? Does the 'seq' in

Yes. If you use more than one processor, each processor creates
its own entire matrix, which can have different sizes and values.

> MatCreateSeqSBAIJ stand for sequential, meaning "not parallel"?

> If that is the case, can you tell me how I should write it so that, it
> creates a parallel matrix if I'm using more than one processor(that would be
> more efficient, right?), and create a sequential matrix if I am using only
> one processor?

You may look at the example
~petsc/src/ksp/ksp/examples/tutorials/ex5.c for how to create an MPI matrix.
The default matrix format is aij. You can use the runtime option
'-mat_type sbaij -mat_ignore_lower_triangular'
or call MatSetType(sA,MATSBAIJ) in your code
to set the sbaij format.
In this case, all processors jointly create a single parallel matrix
with block rows distributed across the processors.
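The block-row distribution above can be sketched in plain Python. This is a
hedged illustration of the usual convention for splitting n rows over p
processes (each process gets a contiguous chunk, with the remainder spread
over the first few ranks); the function name `ownership_ranges` is my own,
not a PETSc API:

```python
def ownership_ranges(n_rows, n_procs):
    """Split n_rows into contiguous chunks, one per process.

    Illustrative only: each process gets n_rows // n_procs rows,
    and the first n_rows % n_procs processes get one extra row.
    """
    base, extra = divmod(n_rows, n_procs)
    ranges = []
    start = 0
    for rank in range(n_procs):
        local = base + (1 if rank < extra else 0)
        ranges.append((start, start + local))  # half-open [start, end)
        start += local
    return ranges

# 10 rows over 3 processes
print(ownership_ranges(10, 3))  # → [(0, 4), (4, 7), (7, 10)]
```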

> >
> > The matrix reordering for sbaij matrix requires reordering of
> > entire matrix data structure (rows and columns!), causing
> > wild data accessing and communication, and only gains saving
> > half of the original sparse matrix entries.
> >             ^^^^^^^^^^^^^^^^^^^^^
> > Using aij matrix format, the matrix reordering can be done efficiently.
> > We provide icc and cholesky matrix factorizations for aij
> > format, e.g., preconditioners are stored in sbaij format
> > (half of matrix entries).
> > Note, matrix factors are in general denser than the original
> > sparse matrix.
> I will check out using the aij matrix but having to store almost double the
> matrix entries seem inefficient to me.

You only store the entire original sparse matrix once;
its nonzeros are in general much fewer than the nonzeros in the matrix
factorizations or preconditioners. Using icc or cholesky matrix factorization,
you save half of the factor storage and get efficient data retrieval.
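The point that factors are denser than the original matrix can be seen on a
tiny example. This is an illustrative sketch (not PETSc code): a dense
Cholesky of the 5-point Laplacian on a 3x3 grid, counting nonzeros in the
lower triangle before and after factorization:

```python
def laplacian_3x3():
    """9x9 SPD matrix: 5-point Laplacian on a 3x3 grid, natural ordering."""
    n = 9
    A = [[0.0] * n for _ in range(n)]
    for k in range(n):
        i, j = divmod(k, 3)
        A[k][k] = 4.0
        for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            ni, nj = i + di, j + dj
            if 0 <= ni < 3 and 0 <= nj < 3:
                A[k][ni * 3 + nj] = -1.0
    return A

def cholesky(A):
    """Dense lower-triangular Cholesky factor L with A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s ** 0.5 if i == j else s / L[j][j]
    return L

def nnz_lower(M, tol=1e-12):
    """Count nonzeros in the lower triangle (including the diagonal)."""
    return sum(1 for i in range(len(M))
               for j in range(i + 1) if abs(M[i][j]) > tol)

A = laplacian_3x3()
L = cholesky(A)
print(nnz_lower(A), nnz_lower(L))  # the factor has extra fill-in
```

The factor L picks up fill-in entries where A had structural zeros, so
storing only the symmetric half of L (as the sbaij-format preconditioner
does) is where the real savings are.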

> I just took a quick look at the MUMPS solver. It seems like they do have
> support for distributed direct solving of symmetric matrices with ordering.
> So, this kind of thing has been done before.. But I don't know about the
> efficiency because I haven't use MUMPS before.. But I do have a symmetric
> direct solver (using 1 processor) in my fem code that uses ordering and
> right now it is a LOT faster than the symmetric solver in petsc. I'm
> guessing that is because petsc does not support reordering.

Matrix reordering can be done for a symmetric matrix. We removed this
support because the better and more efficient approach is to use

aij matrix + matrix ordering + symmetric preconditioner
(icc or cholesky).

I just came back from visiting the MUMPS group in France (which is why
this email was delayed). Although MUMPS allows the user to
input half of a symmetric matrix in distributed form,
it first collects the matrix structure (row and column indices) onto a
centralized host process, does the analysis and reordering, then
reorganizes the user's matrix values into its own data structure -
lots of internal operations!
Saving half of your matrix storage doesn't lead to overall savings.
You may contact the MUMPS developers for details of their symmetric solver.
If you want to use a parallel direct solver, MUMPS is a good choice.
Note, PETSc is interfaced with MUMPS.
> > icc/cholesky preconditioners save half of storage than ilu/lu
> > preconditioner and might be faster. If you have symmetric
> > matrix, use aij format for your matrix, and apply
> > icc/cholesky preconditioner.
> > You can apply any symmetric ordering efficiently to the aij
> > matrix to reduce the fillings in matrix factorization.
I have repeated this point above.

> Can you please clarify that? What do you mean by symmetric ordering?
Let Pr and Pc be row and column permutations.
In general we solve (Pr * A * Pc^{-1}) * (Pc * x) = Pr * rhs.
If you have a symmetric matrix and want to use a symmetric
preconditioner/solver, you must maintain a symmetric matrix after
matrix ordering, i.e., apply a single permutation matrix P:

(P * A * P^{-1}) * (P * x) = P * rhs.

Since P is a permutation matrix, it satisfies
P^{-1} = P^T, so P * A * P^{-1} = P * A * P^T remains symmetric.
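A small sketch of this fact in plain Python (illustrative, not PETSc code):
applying the same permutation p to rows and columns, i.e. forming
B[i][j] = A[p[i]][p[j]], is exactly P * A * P^T and preserves symmetry:

```python
def permute_sym(A, p):
    """Symmetric permutation P A P^T: reorder rows and columns identically."""
    n = len(A)
    return [[A[p[i]][p[j]] for j in range(n)] for i in range(n)]

def is_symmetric(A):
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

A = [[4, 1, 0],
     [1, 3, 2],
     [0, 2, 5]]
p = [2, 0, 1]  # an arbitrary permutation of the indices

B = permute_sym(A, p)
assert is_symmetric(A) and is_symmetric(B)
```

A row-only or column-only permutation (Pr != Pc) would destroy this symmetry,
which is why symmetric solvers require the same ordering on both sides.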

>I would certainly prefer symmetric solvers that can do reordering. But
>let you know what I come up with when I use aij with cholesky.

Matrix reordering reduces fill-in in icc/cholesky factorization
and facilitates efficient data access - very important for
sparse matrix computation.
Using the aij matrix format enables the user to apply an efficient
matrix ordering, while retaining the space savings and fast data access
during preconditioning and solves.
We have investigated matrix ordering schemes (including
well-established schemes)
on symmetric matrices stored in half.
Saving half of the user's input doesn't lead to overall space savings.
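How much ordering matters for fill-in can be shown with the classic
arrowhead example. This is a hedged, illustrative sketch in plain Python
(not PETSc code): the same SPD matrix factored with its dense row/column
first (full fill-in) versus last (no fill-in at all):

```python
def cholesky(A):
    """Dense lower-triangular Cholesky factor L with A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s ** 0.5 if i == j else s / L[j][j]
    return L

def nnz_lower(M, tol=1e-12):
    """Count nonzeros in the lower triangle (including the diagonal)."""
    return sum(1 for i in range(len(M))
               for j in range(i + 1) if abs(M[i][j]) > tol)

n = 6
# Arrowhead matrix with the dense row/column FIRST: eliminating node 0
# connects all remaining nodes, so the factor fills in completely.
bad = [[0.0] * n for _ in range(n)]
bad[0][0] = 10.0
for i in range(1, n):
    bad[i][i] = 2.0
    bad[0][i] = bad[i][0] = 1.0

# The same matrix under the reversal ordering (dense row/column LAST):
# the factor keeps exactly the original sparsity pattern.
good = [[bad[n - 1 - i][n - 1 - j] for j in range(n)] for i in range(n)]

print(nnz_lower(cholesky(bad)), nnz_lower(cholesky(good)))  # → 21 11
```

With the bad ordering the factor is a full lower triangle (21 entries for
n = 6); with the good ordering it has only the 11 original entries. Real
ordering schemes (RCM, minimum degree, nested dissection) automate this
choice for general sparsity patterns.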


More information about the petsc-users mailing list