[petsc-users] fieldsplit-preconditioner for block-system of linear equations

Eike Mueller E.Mueller at bath.ac.uk
Thu Oct 1 03:32:44 CDT 2015


Hi Matt,


thanks a lot for your answer. Assembling the matrices into one large matrix should be ok, since I have control over that.


But am I right in assuming that the vectors then have to be stored as one large PETSc vector? That is, if X = (x^(1),x^(2),...,x^(k)), where the individual vectors x^(i) have length n_i, do I have to store X as a single PETSc vector of length n_1+n_2+...+n_k? In the data structure I use, storage for the vectors is already allocated, so currently I use VecCreateSeqWithArray() to construct k separate PETSc vectors. It sounds like I can't do that, but instead have to create one big PETSc vector and copy the data into it. That's fine if it's the only option, since the overhead should not be too large, but I wanted to check before doing something unnecessary.
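
If copying is indeed the only option, I would do something along the following lines (a minimal serial sketch; GatherIntoBigVec, k, n and x are placeholder names for my own data structures, not existing PETSc calls):

#include <petscvec.h>

/* Sketch: gather k existing C arrays x[i] of length n[i] into one
 * contiguous PETSc vector X of length n[0]+...+n[k-1]. */
PetscErrorCode GatherIntoBigVec(PetscInt k, const PetscInt n[],
                                PetscScalar *const x[], Vec *X)
{
  PetscErrorCode ierr;
  PetscInt       i, j, N = 0, offset = 0;
  PetscScalar   *bigarray;

  PetscFunctionBeginUser;
  for (i = 0; i < k; ++i) N += n[i];
  ierr = VecCreateSeq(PETSC_COMM_SELF, N, X);CHKERRQ(ierr);
  ierr = VecGetArray(*X, &bigarray);CHKERRQ(ierr);
  for (i = 0; i < k; ++i) {
    for (j = 0; j < n[i]; ++j) bigarray[offset + j] = x[i][j];
    offset += n[i];
  }
  ierr = VecRestoreArray(*X, &bigarray);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}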


Thanks,


Eike


________________________________
From: Matthew Knepley <knepley at gmail.com>
Sent: 30 September 2015 15:27
To: Eike Mueller
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] fieldsplit-preconditioner for block-system of linear equations

On Wed, Sep 30, 2015 at 3:00 AM, Eike Mueller <E.Mueller at bath.ac.uk<mailto:E.Mueller at bath.ac.uk>> wrote:
Dear PETSc,

I am solving a linear system of equations

(1) A.X = B

where the vector X is divided into chunks, such that X = (x_1,x_2,...,x_k) and each of the vectors x_i has length n_i (same for the vector B = (b_1,b_2,...,b_k)). k = 5 in my case, and n_i >> 1. This partitioning implies that the matrix has a block structure, where the submatrices A_{ij} are of size n_i x n_j. Explicitly, for k = 3:

        A_{1,1}.x_1 + A_{1,2}.x_2 + A_{1,3}.x_3 = b_1
(2)    A_{2,1}.x_1 + A_{2,2}.x_2 + A_{2,3}.x_3 = b_2
        A_{3,1}.x_1 + A_{3,2}.x_2 + A_{3,3}.x_3 = b_3

I now want to solve this system with a Krylov-method (say, GMRES) and precondition it with a field-split preconditioner, such that the Schur-complement is formed in the 1-block. I know how to do this if I have assembled the big matrix A, and I store all x_i in one big vector X (I construct the index-sets corresponding to the vectors x_1 and (x_2,x_3), and then call PCFieldSplitSetIS()). This all works without any problems.
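
For reference, the setup I use in that (working) case looks roughly like this (a minimal serial sketch; n1, n2, n3 stand for my block sizes and A, B, X for the assembled matrix and vectors):

#include <petscksp.h>

/* Sketch: GMRES with a Schur-complement fieldsplit, splitting the
 * unknowns into x_1 (the first n1 entries) and (x_2,x_3) (the rest). */
PetscErrorCode SolveWithFieldSplit(Mat A, Vec B, Vec X,
                                   PetscInt n1, PetscInt n2, PetscInt n3)
{
  PetscErrorCode ierr;
  KSP            ksp;
  PC             pc;
  IS             is0, is1;

  PetscFunctionBeginUser;
  /* index sets for the two splits in the big vector ordering */
  ierr = ISCreateStride(PETSC_COMM_SELF, n1, 0, 1, &is0);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_SELF, n2 + n3, n1, 1, &is1);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_SELF, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "0", is0);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "1", is1);CHKERRQ(ierr);
  ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);

  ierr = KSPSolve(ksp, B, X);CHKERRQ(ierr);

  ierr = ISDestroy(&is0);CHKERRQ(ierr);
  ierr = ISDestroy(&is1);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}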

However, I now want to do this for an existing code which

1. Assembles the matrices A_{ij} separately (i.e. they are not stored as part of a big matrix A, but are stored in independent memory locations)
2. Stores the vectors x_i separately, i.e. they are not stored as parts of one big chunk of memory of size n_1+...+n_k

I can of course implement the matrix application via a matrix shell, but is there still an easy way of using the fieldsplit preconditioner?

From your description, it sounds like you could use

  http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetLocalSubMatrix.html

to assemble your separate matrices directly into a large matrix. Furthermore, if it really benefits you
to have separate memory, you can change the matrix type from MATAIJ to MATNEST dynamically, and
none of your code would change. This would let you check things with LU, but still optimize when
necessary.
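
In outline, that could look like the following (a sketch only; it assumes a local-to-global mapping has been attached to A with MatSetLocalToGlobalMapping(), and is[i] stands for the index set describing block i):

/* Sketch: assemble block A_{ij} directly into the large matrix A.
 * The same assembly loop then works whether A is MATAIJ or MATNEST. */
Mat Aij;
ierr = MatGetLocalSubMatrix(A, is[i], is[j], &Aij);CHKERRQ(ierr);
/* ... insert entries with MatSetValuesLocal(Aij, ...) exactly as when
 *     assembling the standalone block ... */
ierr = MatRestoreLocalSubMatrix(A, is[i], is[j], &Aij);CHKERRQ(ierr);
ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);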

  Thanks,

      Matt

The naive way I can think of is to allocate a new big matrix A and copy the A_{ij} into the corresponding blocks, then allocate big vectors x and b and copy the data in and out before and after the solve. This, however, seems wasteful, so before going down that route I wanted to double-check whether there is a way around it, since this seems to be a very common problem.

Thanks a lot,

Eike




--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

