[petsc-users] Solving tridiagonal hermitian generalized eigenvalue problem with SLEPC

Hong hzhang at mcs.anl.gov
Mon Aug 18 15:20:55 CDT 2014


Toon:
>
> I think I expressed myself badly: I can indeed get it to work just by
> using AIJ matrices, as in example 13. This is the way I am currently
> solving my problem. There are only two issues:
> 1. Memory is indeed important, so I would certainly like to decrease it by
> one third if possible :-) The goal is to make the simulations as fast and
> light as possible, to be able to perform parameter studies (on the stability
> of MHD configurations).

We need to find out where the memory is being used. It seems you are using
the SLEPc defaults (krylovschur/sinvert/preonly/lu); the LU factorization
probably consumes most of the memory. Run your code with '-eps_view' to
find out which solvers you are using.
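For example, using the executable name from your error log below:

    ./PB3D -eps_view

The output lists the EPS type, the ST (spectral transformation) type and
the inner KSP/PC types, so you can see where the factorization (and thus
the memory) goes.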
>
> 2. I have played around a little bit with the different solvers, but it
> appears that the standard method and the Arnoldi method with explicit
> restart are the best. Some of the others don't converge and some are slower.

Your problem is symmetric (Hermitian), so you should use:
  problem type: generalized symmetric eigenvalue problem
It seems you are using a non-symmetric type; check it with '-eps_view'.
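A minimal sketch of setting this in code, assuming 'eps' is your EPS
object:

    ierr = EPSSetProblemType(eps,EPS_GHEP);CHKERRQ(ierr);
    /* generalized Hermitian problem A x = lambda B x;
       note EPS_GHEP assumes B is positive (semi-)definite */

or equivalently pass '-eps_gen_hermitian' on the command line.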
>
> The thing is that in the end the matrices that I use are large, but they have
> a very simple structure: Hermitian tridiagonal. That's why, I think, SLEPc
> usually converges in a few iterations (correct me if I'm wrong).

The number of iterations does not depend on the structure of the
matrices, but on the location of the eigenvalues.
>
> The problem is that sometimes, when I consider more grid points, the solver
> doesn't work any more because apparently it uses an LU decomposition (I'm
> not sure whether of the matrix A or B in A x = lambda B x) and there is a
> zero pivot element (see below for the error message). In other words, the
> matrices become almost singular. This is a characteristic of the numerical
> method, I think. Is there any way to fix this by setting a more precise
> threshold or something?

You need to select the target of the shift carefully.
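For example, a minimal sketch using shift-and-invert with an explicit
target (the value 1.0 is only a placeholder; choose it close to the
eigenvalues you want and away from singular points of A - sigma*B):

    ST st;
    ierr = EPSGetST(eps,&st);CHKERRQ(ierr);
    ierr = STSetType(st,STSINVERT);CHKERRQ(ierr);    /* factor A - sigma*B */
    ierr = EPSSetTarget(eps,1.0);CHKERRQ(ierr);      /* the shift sigma    */
    ierr = EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);CHKERRQ(ierr);

or '-st_type sinvert -eps_target 1.0' on the command line. If the
factorization hits a zero pivot at one shift, moving the target slightly
usually avoids it.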
>
> Also, there is one more thing: is it possible and useful to use the non-zero
> structure of A when defining matrix B? Does this affect performance?

Yes, using the same nonzero structure for A and B would make MatAXPY
faster, but it may not affect overall performance much if you only use a
single shift, as in ex13.c.
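A minimal sketch of one way to give B exactly the same nonzero pattern as
A (just an illustration, not what ex13.c does):

    Mat B;
    ierr = MatDuplicate(A,MAT_DO_NOT_COPY_VALUES,&B);CHKERRQ(ierr);
    /* B now has A's nonzero pattern with zeroed values;
       fill it with MatSetValues() and assemble */
    ierr = MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

With identical patterns, MatAXPY can then use the SAME_NONZERO_PATTERN
fast path.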

Hong

>
> Error messages (identical for Krylov-Schur and Arnoldi):
>
> [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> [0]PETSC ERROR: Detected zero pivot in LU factorization:
> see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot!
> [0]PETSC ERROR: Zero pivot row 19 value 6.57877e-16 tolerance 2.22045e-14!
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: Petsc Release Version 3.4.5, Jun, 29, 2014
> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [0]PETSC ERROR: See docs/index.html for manual pages.
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: ./PB3D on a debug-complex named toon-XPS-L501X by toon Mon
> Aug 18 21:47:41 2014
> [0]PETSC ERROR: Libraries linked from
> /opt/petsc/petsc-3.4.5/debug-complex/lib
> [0]PETSC ERROR: Configure run at Mon Aug 11 10:08:29 2014
> [0]PETSC ERROR: Configure options PETSC_ARCH=debug-complex
> --with-scalar-type=complex --with-debugging
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: MatPivotCheck_none() line 589 in
> /opt/petsc/petsc-3.4.5/include/petsc-private/matimpl.h
> [0]PETSC ERROR: MatPivotCheck() line 608 in
> /opt/petsc/petsc-3.4.5/include/petsc-private/matimpl.h
> [0]PETSC ERROR: MatLUFactorNumeric_SeqAIJ_Inode() line 1837 in
> /opt/petsc/petsc-3.4.5/src/mat/impls/aij/seq/inode.c
> [0]PETSC ERROR: MatLUFactorNumeric() line 2889 in
> /opt/petsc/petsc-3.4.5/src/mat/interface/matrix.c
> [0]PETSC ERROR: PCSetUp_LU() line 152 in
> /opt/petsc/petsc-3.4.5/src/ksp/pc/impls/factor/lu/lu.c
> [0]PETSC ERROR: PCSetUp() line 890 in
> /opt/petsc/petsc-3.4.5/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: KSPSetUp() line 278 in
> /opt/petsc/petsc-3.4.5/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: PCSetUp_Redundant() line 170 in
> /opt/petsc/petsc-3.4.5/src/ksp/pc/impls/redundant/redundant.c
> [0]PETSC ERROR: PCSetUp() line 890 in
> /opt/petsc/petsc-3.4.5/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: KSPSetUp() line 278 in
> /opt/petsc/petsc-3.4.5/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: STSetUp_Shift() line 126 in
> /opt/slepc/slepc-3.4.4/src/st/impls/shift/shift.c
> [0]PETSC ERROR: STSetUp() line 294 in
> /opt/slepc/slepc-3.4.4/src/st/interface/stsolve.c
> [0]PETSC ERROR: EPSSetUp() line 215 in
> /opt/slepc/slepc-3.4.4/src/eps/interface/setup.c
> [0]PETSC ERROR: EPSSolve() line 90 in
> /opt/slepc/slepc-3.4.4/src/eps/interface/solve.c
>
>
>
> On 18 August 2014 20:52, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>>
>>    I would recommend just using AIJ matrices and not worrying about SBAIJ at
>> this stage. SBAIJ can save you memory but not operations, and unless memory
>> is really tight it is not likely a useful optimization.
>>
>>    What eigensolvers have you tried and which eigenvalues do you want?
>>
>>    Barry
>>
>> On Aug 18, 2014, at 5:46 AM, Toon Weyens <tweyens at fis.uc3m.es> wrote:
>>
>> > Dear all,
>> >
>> > I am using PETSc and SLEPc to simulate a problem in MHD, described in
>> > http://scitation.aip.org/content/aip/journal/pop/21/4/10.1063/1.4871859.
>> >
>> > I have a bit of experience with MPI but not so much with PETSc and
>> > SLEPc. So after reading both user manuals, as well as the relevant chapters
>> > of the PETSc developers manual, I still can't get it to work.
>> >
>> > The problem that I have to solve is a large generalized eigenvalue
>> > system where the matrices are both Hermitian and block tridiagonal,
>> > e.g.:
>> >
>> > ( A11  A12   0    0    0  )               ( B11  B12   0    0    0  )
>> > ( A12* A22  A23   0    0  )               ( B12* B22  B23   0    0  )
>> > (  0   A23* A33  A34   0  ) x = lambda    (  0   B23* B33  B34   0  ) x
>> > (  0    0   A34* A44  A45 )               (  0    0   B34* B44  B45 )
>> > (  0    0    0   A45* A55 )               (  0    0    0   B45* B55 )
>> >
>> > where Aii = Aii*, with * the Hermitian conjugate. I apologize for the
>> > ugly representation.
>> >
>> > The dimensions of both A and B are around 50 to 100 blocks (as there is
>> > a block per discretized point), and the blocks themselves can vary from
>> > 1x1 to more than 100x100 (as they correspond to a spectral decomposition).
>> >
>> > Now, my question is: how to solve this economically?
>> >
>> > What I have been trying to do is make use of the fact that the
>> > matrices are Hermitian, using MatCreateSBAIJ, or the recommended
>> > MatCreate, MatSetType(MATSBAIJ), etc.
>> >
>> > Could someone help me out? All help would be greatly appreciated!
>> >
>> > Thank you in advance,
>> > Toon
>> > UC3M
>>
>

