[petsc-users] MatPtAP

Samuel Lanthaler s.lanthaler at gmail.com
Mon Jun 4 07:56:36 CDT 2018


OK, I now realize that I had implemented the boundary conditions in an 
unnecessarily complicated way... As you pointed out, I can just 
manipulate individual matrix rows to enforce the BCs; that is probably 
what is commonly done. This way I never have to call MatPtAP or do any 
other expensive operations. I've changed my code, and it seems to work 
fine and is much faster.
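
Concretely, the row manipulation now looks roughly like the sketch below 
(simplified, and with placeholder names like matA, matB, nbc, bcrows instead 
of my actual variables): the boundary rows of A and B are zeroed except for 
a diagonal entry, so that each boundary row only contributes one known, 
spurious eigenvalue diagA/diagB, which can be placed far away from the 
physical spectrum.

  subroutine apply_bc_rows(matA, matB, nbc, bcrows, ierr)
#include <petsc/finclude/petscmat.h>
    use petscmat
    implicit none
    Mat            :: matA, matB    ! assembled matrices A and B
    PetscInt       :: nbc           ! number of locally owned boundary rows
    PetscInt       :: bcrows(nbc)   ! global indices of those rows
    PetscErrorCode :: ierr
    PetscScalar    :: diagA, diagB

    ! Zero each boundary row, keeping only a diagonal entry; every boundary
    ! row then contributes the single spurious eigenvalue diagA/diagB.
    diagA = 1.0d30
    diagB = 1.0d0
    call MatZeroRows(matA, nbc, bcrows, diagA, PETSC_NULL_VEC, PETSC_NULL_VEC, ierr)
    call MatZeroRows(matB, nbc, bcrows, diagB, PETSC_NULL_VEC, PETSC_NULL_VEC, ierr)
  end subroutine apply_bc_rows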

Thanks a lot for your help, everyone! This has been very educational for me.

Samuel


On 06/01/2018 09:04 PM, Hong wrote:
> Samuel,
> I have the following questions:
> 1) Why is solving \lambda*Pt*B*P*y = Pt*A*P*y better than solving the 
> original \lambda*B*x = A*x?
>
> 2) Does your eigensolver require a matrix factorization? If not, i.e., 
> if it only uses mat-vec multiplication, then you can implement
> z = Pt*(A*(P*y)) in your eigensolver instead of using mat-mat-mat 
> multiplication; a rough sketch follows below.
>
> 3) PETSc's MatPtAP() was implemented for multigrid applications, in which 
> the product C = Pt*A*P is denser than A but much smaller.
> I have not seen a use case like yours, where P is square and has the same 
> size as A. If C is much denser than A, it is to be expected that MatPtAP 
> consumes a large portion of the time.
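>
> For 2), an illustrative sketch of what I mean (w1, w2 are work vectors, 
> y is the input and z the output; the variable names are only examples):
>
>     Vec :: w1, w2
>     call MatCreateVecs(petsc_matA, w1, w2, ierr)
>     call MatMult(petsc_matP, y, w1, ierr)            ! w1 = P*y
>     call MatMult(petsc_matA, w1, w2, ierr)           ! w2 = A*(P*y)
>     call MatMultTranspose(petsc_matP, w2, z, ierr)   ! z  = Pt*(A*(P*y))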
>
> Hong
>
> On Fri, Jun 1, 2018 at 12:35 PM, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
>
>
>       Could you save both the P matrix and the A matrix with MatView()
>     using a binary viewer and email them to petsc-maint at mcs.anl.gov?
>     Then we can run the code in the profiler with your matrices and
>     see if there is any way to speed up the computation.
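>
>     Something along these lines should do it (the file names here are
>     just examples):
>
>        PetscViewer :: viewer
>        call PetscViewerBinaryOpen(PETSC_COMM_WORLD, 'matA.bin', FILE_MODE_WRITE, viewer, ierr)
>        call MatView(petsc_matA, viewer, ierr)
>        call PetscViewerDestroy(viewer, ierr)
>        call PetscViewerBinaryOpen(PETSC_COMM_WORLD, 'matP.bin', FILE_MODE_WRITE, viewer, ierr)
>        call MatView(petsc_matP, viewer, ierr)
>        call PetscViewerDestroy(viewer, ierr)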
>
>        Barry
>
>
>     > On Jun 1, 2018, at 11:07 AM, Samuel Lanthaler <s.lanthaler at gmail.com> wrote:
>     >
>     > On 06/01/2018 03:42 PM, Matthew Knepley wrote:
>     >> On Fri, Jun 1, 2018 at 9:21 AM, Samuel Lanthaler <s.lanthaler at gmail.com> wrote:
>     >> Hi,
>     >>
>     >> I was wondering what the most efficient way to use MatPtAP
>     would be in the following situation: I am discretizing a PDE
>     system. The discretization yields a matrix A that has a band
>     structure (with k upper and lower bands, say). In order to
>     implement the boundary conditions, I use a transformation matrix P
>     which is essentially the unit matrix, except for the entries
>     P_{ij} where i,j<k or n-i,n-j<k, so
>     >>
>     >> P = [ B, 0, 0, 0, ..., 0, 0 ]
>     >>     [ 0, 1, 0, 0, ..., 0, 0 ]
>     >>     [           ...         ]
>     >>     [ 0, 0, 0, 0, ..., 1, 0 ]
>     >>     [ 0, 0, 0, 0, ..., 0, C ]
>     >>
>     >> where B and C are (k-by-k) matrices.
>     >> Right now, I'm simply constructing A, P and calling
>     >>
>     >>     CALL MatPtAP(petsc_matA, petsc_matP, MAT_INITIAL_MATRIX, PETSC_DEFAULT_REAL, petsc_matPtAP, ierr)
>     >>
>     >> where I haven't done anything to petsc_matPtAP prior to this
>     call. Is this the way to do it?
>     >>
>     >> I'm asking because, currently, setting up the matrices A and P
>     takes very little time, whereas the operation MatPtAP is taking
>     quite long, which seems very odd... The matrices are of type
>     MPIAIJ. In my problem, the total matrix dimension is around 10'000
>     and the matrix blocks (B,C) are of size ~100.
>     >>
>     >> Are you sure this is what you want to do? Usually BCs are local,
>     since by definition PDEs are local, and
>     >> are applied pointwise. What kind of BCs do you have here?
>     >>
>     >
>     > The boundary conditions are a mixture of Dirichlet and Neumann;
>     in my case, the PDE is a system involving 8 variables on a disk,
>     where the periodic direction is discretized using a Fourier series
>     expansion and the radial direction uses B-splines.
>     >
>     > In reality, I have two matrices A,B, and want to solve the
>     eigenvalue problem \lambda*B*x = A*x.
>     > I found it quite convenient to use a transformation P to a
>     different set of variables y, such that x=P*y and x satisfies the
>     BC iff certain components of y are 0. The latter is enforced by
>     inserting spurious eigenvalues at the relevant components of y in
>     the transformed eigenvalue problem \lambda*Pt*B*P*y=Pt*A*P*y.
>     After solving the EVP in terms of y, I get back x=P*y.
>     > Is this an inherently bad/inefficient way of enforcing BCs? Thanks.
>     >
>     >
>     >
>     >
>     >>   Thanks,
>     >>
>     >>     Matt
>     >>
>     >> Thanks in advance for any ideas.
>     >>
>     >> Cheers,
>     >> Samuel
>     >>
>     >>
>     >>
>     >> --
>     >> What most experimenters take for granted before they begin
>     their experiments is infinitely more interesting than any results
>     to which their experiments lead.
>     >> -- Norbert Wiener
>     >>
>     >> https://www.cse.buffalo.edu/~knepley/
>     >
>
>


