[petsc-users] Orthogonality of eigenvectors in SLEPC
Jose E. Roman
jroman at dsic.upv.es
Sat Dec 11 01:43:36 CST 2021
Case a is the simplest one to use and enforces B-orthogonality of eigenvectors. Case b can also be used if you have a good reason to do so, and in that case you can recover symmetry (and B-orthogonality) by explicitly setting the inner product matrix as illustrated in ex47.c https://slepc.upv.es/documentation/current/src/eps/tutorials/ex47.c.html
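For reference, a minimal sketch of both setups in C (assuming A and B are already assembled, B is Hermitian positive definite, and, for case b, that M is a shell matrix applying B^-1*A; see ex47.c for the exact call sequence):

#include <slepceps.h>

/* Case a: pass A and B directly and declare the problem as GHEP;
   the computed eigenvectors are then B-orthogonal. */
PetscErrorCode SolveAsGHEP(Mat A,Mat B,EPS *eps)
{
  PetscErrorCode ierr;

  ierr = EPSCreate(PetscObjectComm((PetscObject)A),eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(*eps,A,B);CHKERRQ(ierr);
  ierr = EPSSetProblemType(*eps,EPS_GHEP);CHKERRQ(ierr);
  ierr = EPSSetFromOptions(*eps);CHKERRQ(ierr);
  ierr = EPSSolve(*eps);CHKERRQ(ierr);
  return 0;
}

/* Case b: solve with M = B^{-1}A (assumed to be a shell matrix) and attach B
   as the inner-product matrix of the solver's BV, so that orthogonalization
   is done in the B-inner product, in which M is self-adjoint. */
PetscErrorCode SolveWithBInnerProduct(Mat M,Mat B,EPS *eps)
{
  PetscErrorCode ierr;
  BV             bv;

  ierr = EPSCreate(PetscObjectComm((PetscObject)M),eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(*eps,M,NULL);CHKERRQ(ierr);
  ierr = EPSSetProblemType(*eps,EPS_HEP);CHKERRQ(ierr);
  ierr = EPSSetFromOptions(*eps);CHKERRQ(ierr);
  ierr = EPSSetUp(*eps);CHKERRQ(ierr);
  ierr = EPSGetBV(*eps,&bv);CHKERRQ(ierr);
  ierr = BVSetMatrix(bv,B,PETSC_FALSE);CHKERRQ(ierr);
  ierr = EPSSolve(*eps);CHKERRQ(ierr);
  return 0;
}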
Jose
> El 11 dic 2021, a las 1:10, Wang, Kuang-chung <kuang-chung.wang at intel.com> escribió:
>
> • I was able to use the MatIsHermitian() function successfully, since my matrix type is seqaij. I had just hoped that it could also return the norm of (H - H^+) directly. But it wasn't hard for me to implement that with the functions listed in the previous email.
> • I have a related question. In the user manual (https://slepc.upv.es/documentation/slepc.pdf, Section 2.1), it says that the problem A x = \lambda B x is usually reformulated as B^-1 A x = \lambda x. In my case the A matrix is Hermitian and B is diagonal, but Bii and Bjj can be different.
> a) Will solving "A x = \lambda B x" directly with SLEPc guarantee that users receive B-orthogonal eigenvectors? Namely, does xi^T*B*xj = delta_ij hold true?
> b) If we reformulate to (B^-1 A) x = \lambda x, the matrix B^-1 A is non-Hermitian and therefore does not give orthogonal eigenvectors (as Jose pointed out), i.e. xi^T*xj != delta_ij. But what about xi^T*B*xj = delta_ij: will that still be guaranteed, since this is the same problem as in case a? Currently, my test shows that xi^T*B*xj = delta_ij is no longer true (see the sketch below).
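> (A check of this property could look roughly like the following sketch, assuming eps is the solved EPS, B the mass matrix, and i, j the indices of two converged eigenpairs:)
>
> PetscErrorCode CheckBOrthogonality(EPS eps,Mat B,PetscInt i,PetscInt j)
> {
>   PetscErrorCode ierr;
>   Vec            xi,xj,Bxj;
>   PetscScalar    kr,ki,dot;
>
>   ierr = MatCreateVecs(B,&xi,NULL);CHKERRQ(ierr);
>   ierr = VecDuplicate(xi,&xj);CHKERRQ(ierr);
>   ierr = VecDuplicate(xi,&Bxj);CHKERRQ(ierr);
>   ierr = EPSGetEigenpair(eps,i,&kr,&ki,xi,NULL);CHKERRQ(ierr);
>   ierr = EPSGetEigenpair(eps,j,&kr,&ki,xj,NULL);CHKERRQ(ierr);
>   ierr = MatMult(B,xj,Bxj);CHKERRQ(ierr);
>   ierr = VecDot(Bxj,xi,&dot);CHKERRQ(ierr);  /* dot = xi^H B xj, should be ~ delta_ij */
>   ierr = PetscPrintf(PETSC_COMM_WORLD,"x%d^H B x%d = %g + %g i\n",(int)i,(int)j,(double)PetscRealPart(dot),(double)PetscImaginaryPart(dot));CHKERRQ(ierr);
>   ierr = VecDestroy(&xi);CHKERRQ(ierr);
>   ierr = VecDestroy(&xj);CHKERRQ(ierr);
>   ierr = VecDestroy(&Bxj);CHKERRQ(ierr);
>   return 0;
> }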
> To help with visibility:
> <image002.jpg>
>
> Although a and b are the same problem formulated differently, orthogonality appears to be ensured in case a but not in case b(?).
> If so, does it mean that we should be encouraged to use case a?
>
> Best,
> Kuang
>
>
> From: Zhang, Hong <hzhang at mcs.anl.gov>
> Sent: Thursday, December 2, 2021 2:18 PM
> To: Wang, Kuang-chung <kuang-chung.wang at intel.com>; Jose E. Roman <jroman at dsic.upv.es>
> Cc: petsc-users at mcs.anl.gov; Obradovic, Borna <borna.obradovic at intel.com>; Cea, Stephen M <stephen.m.cea at intel.com>
> Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC
>
> Kuang,
> PETSc supports MatIsHermitian() for SeqAIJ, IS and SeqSBAIJ matrix types. What is your matrix type?
> We should be able to add this support to other mat types.
> Hong
> From: petsc-users <petsc-users-bounces at mcs.anl.gov> on behalf of Wang, Kuang-chung <kuang-chung.wang at intel.com>
> Sent: Thursday, December 2, 2021 2:06 PM
> To: Jose E. Roman <jroman at dsic.upv.es>
> Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>; Obradovic, Borna <borna.obradovic at intel.com>; Cea, Stephen M <stephen.m.cea at intel.com>
> Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC
>
> Thanks Jose for your prompt reply.
> I did find my matrix to be highly non-Hermitian. By forcing the solver to treat the problem as Hermitian, the orthogonality was restored.
> But I still need to root-cause why my matrix is non-Hermitian in the first place.
> Along the way, I highly recommend the MatIsHermitian() function, or combining MatHermitianTranspose(), MatAXPY() and MatNorm() to measure the departure from Hermiticity, as a safeguard in the program.
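> A possible sketch of such a check (the tolerance passed to MatIsHermitian() is just an example value):
>
> #include <petscmat.h>
>
> /* Compute the Frobenius norm of H^+ - H as a measure of non-Hermiticity. */
> PetscErrorCode HermiticityCheck(Mat H,PetscReal *nrm)
> {
>   PetscErrorCode ierr;
>   Mat            Ht;
>   PetscBool      flg;
>
>   ierr = MatIsHermitian(H,1e-10,&flg);CHKERRQ(ierr);                     /* quick yes/no check (e.g. for seqaij) */
>   ierr = MatHermitianTranspose(H,MAT_INITIAL_MATRIX,&Ht);CHKERRQ(ierr);  /* Ht = H^+ */
>   ierr = MatAXPY(Ht,-1.0,H,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);     /* Ht = H^+ - H */
>   ierr = MatNorm(Ht,NORM_FROBENIUS,nrm);CHKERRQ(ierr);
>   ierr = MatDestroy(&Ht);CHKERRQ(ierr);
>   return 0;
> }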
>
> Best,
> Kuang
>
> -----Original Message-----
> From: Jose E. Roman <jroman at dsic.upv.es>
> Sent: Wednesday, November 24, 2021 6:20 AM
> To: Wang, Kuang-chung <kuang-chung.wang at intel.com>
> Cc: petsc-users at mcs.anl.gov; Obradovic, Borna <borna.obradovic at intel.com>; Cea, Stephen M <stephen.m.cea at intel.com>
> Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC
>
> In Hermitian eigenproblems orthogonality of eigenvectors is guaranteed/enforced. But you are solving the problem as non-Hermitian.
>
> If your matrix is Hermitian, make sure you solve it as a HEP, and make sure that your matrix is numerically Hermitian.
>
> If your matrix is non-Hermitian, then you cannot expect the eigenvectors to be orthogonal. What you can do in this case is get an orthogonal basis of the computed eigenspace, see https://slepc.upv.es/documentation/current/docs/manualpages/EPS/EPSGetInvariantSubspace.html
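> (A rough usage sketch, assuming eps has already been solved and x is any vector with the same layout as the problem; the caller frees V with VecDestroyVecs():)
>
> #include <slepceps.h>
>
> PetscErrorCode GetOrthonormalBasis(EPS eps,Vec x,PetscInt *nconv,Vec **V)
> {
>   PetscErrorCode ierr;
>
>   ierr = EPSGetConverged(eps,nconv);CHKERRQ(ierr);
>   ierr = VecDuplicateVecs(x,*nconv,V);CHKERRQ(ierr);     /* allocate nconv vectors like x */
>   ierr = EPSGetInvariantSubspace(eps,*V);CHKERRQ(ierr);  /* orthonormal basis of the invariant subspace */
>   return 0;
> }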
>
>
> By the way, version 3.7 is more than 5 years old; it would be better if you can upgrade to a more recent version.
>
> Jose
>
>
>
> > El 24 nov 2021, a las 7:15, Wang, Kuang-chung <kuang-chung.wang at intel.com> escribió:
> >
> > Dear Jose :
> > I came across this thread describing an issue where krylovschur returns non-orthogonal eigenvectors:
> > https://lists.mcs.anl.gov/pipermail/petsc-users/2014-October/023360.html
> >
> > I have furthermore tested reducing the tolerance (highlighted below) from 1e-12 to 1e-16, with no luck.
> > Could you please suggest options/sources to try out?
> > Thanks a lot for sharing your knowledge!
> >
> > Sincerely,
> > Kuang-Chung Wang
> >
> > =======================================================
> > Kuang-Chung Wang
> > Computational and Modeling Technology
> > Intel Corporation
> > Hillsboro OR 97124
> > =======================================================
> >
> > Here is more info:
> > • slepc/3.7.4
> > • output message from EPSView(eps,PETSC_NULL):
> > EPS Object: 1 MPI processes
> > type: krylovschur
> > Krylov-Schur: 50% of basis vectors kept after restart
> > Krylov-Schur: using the locking variant
> > problem type: non-hermitian eigenvalue problem
> > selected portion of the spectrum: closest to target: 20.1161 (in magnitude)
> > number of eigenvalues (nev): 40
> > number of column vectors (ncv): 81
> > maximum dimension of projected problem (mpd): 81
> > maximum number of iterations: 1000
> > tolerance: 1e-12
> > convergence test: relative to the eigenvalue
> > BV Object: 1 MPI processes
> > type: svec
> > 82 columns of global length 2988
> > vector orthogonalization method: classical Gram-Schmidt
> > orthogonalization refinement: always
> > block orthogonalization method: Gram-Schmidt
> > doing matmult as a single matrix-matrix product
> > DS Object: 1 MPI processes
> > type: nhep
> > ST Object: 1 MPI processes
> > type: sinvert
> > shift: 20.1161
> > number of matrices: 1
> > KSP Object: (st_) 1 MPI processes
> > type: preonly
> > maximum iterations=1000, initial guess is zero
> > tolerances: relative=1.12005e-09, absolute=1e-50, divergence=10000.
> > left preconditioning
> > using NONE norm type for convergence test
> > PC Object: (st_) 1 MPI processes
> > type: lu
> > LU: out-of-place factorization
> > tolerance for zero pivot 2.22045e-14
> > matrix ordering: nd
> > factor fill ratio given 0., needed 0.
> > Factored matrix follows:
> > Mat Object: 1 MPI processes
> > type: seqaij
> > rows=2988, cols=2988
> > package used to perform factorization: mumps
> > total: nonzeros=614160, allocated nonzeros=614160
> > total number of mallocs used during MatSetValues calls =0
> > MUMPS run parameters:
> > SYM (matrix type): 0
> > PAR (host participation): 1
> > ICNTL(1) (output for error): 6
> > ICNTL(2) (output of diagnostic msg): 0
> > ICNTL(3) (output for global info): 0
> > ICNTL(4) (level of printing): 0
> > ICNTL(5) (input mat struct): 0
> > ICNTL(6) (matrix prescaling): 7
> > ICNTL(7) (sequential matrix ordering):7
> > ICNTL(8) (scaling strategy): 77
> > ICNTL(10) (max num of refinements): 0
> > ICNTL(11) (error analysis): 0
> > ICNTL(12) (efficiency control): 1
> > ICNTL(13) (efficiency control): 0
> > ICNTL(14) (percentage of estimated workspace increase): 20
> > ICNTL(18) (input mat struct): 0
> > ICNTL(19) (Schur complement info): 0
> > ICNTL(20) (rhs sparse pattern): 0
> > ICNTL(21) (solution struct): 0
> > ICNTL(22) (in-core/out-of-core facility): 0
> > ICNTL(23) (max size of memory can be allocated locally):0
> > ICNTL(24) (detection of null pivot rows): 0
> > ICNTL(25) (computation of a null space basis): 0
> > ICNTL(26) (Schur options for rhs or solution): 0
> > ICNTL(27) (experimental parameter): -24
> > ICNTL(28) (use parallel or sequential ordering): 1
> > ICNTL(29) (parallel ordering): 0
> > ICNTL(30) (user-specified set of entries in inv(A)): 0
> > ICNTL(31) (factors is discarded in the solve phase): 0
> > ICNTL(33) (compute determinant): 0
> > CNTL(1) (relative pivoting threshold): 0.01
> > CNTL(2) (stopping criterion of refinement): 1.49012e-08
> > CNTL(3) (absolute pivoting threshold): 0.
> > CNTL(4) (value of static pivoting): -1.
> > CNTL(5) (fixation for null pivots): 0.
> > RINFO(1) (local estimated flops for the elimination after analysis):
> > [0] 8.15668e+07
> > RINFO(2) (local estimated flops for the assembly after factorization):
> > [0] 892584.
> > RINFO(3) (local estimated flops for the elimination after factorization):
> > [0] 8.15668e+07
> > INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization):
> > [0] 16
> > INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization):
> > [0] 16
> > INFO(23) (num of pivots eliminated on this processor after factorization):
> > [0] 2988
> > RINFOG(1) (global estimated flops for the elimination after analysis): 8.15668e+07
> > RINFOG(2) (global estimated flops for the assembly after factorization): 892584.
> > RINFOG(3) (global estimated flops for the elimination after factorization): 8.15668e+07
> > (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0.,0.)*(2^0)
> > INFOG(3) (estimated real workspace for factors on all processors after analysis): 614160
> > INFOG(4) (estimated integer workspace for factors on all processors after analysis): 31971
> > INFOG(5) (estimated maximum front size in the complete tree): 246
> > INFOG(6) (number of nodes in the complete tree): 197
> > INFOG(7) (ordering option effectively use after analysis): 2
> > INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100
> > INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 614160
> > INFOG(10) (total integer space store the matrix factors after factorization): 31971
> > INFOG(11) (order of largest frontal matrix after factorization): 246
> > INFOG(12) (number of off-diagonal pivots): 0
> > INFOG(13) (number of delayed pivots after factorization): 0
> > INFOG(14) (number of memory compress after factorization): 0
> > INFOG(15) (number of steps of iterative refinement after solution): 0
> > INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 16
> > INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 16
> > INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 16
> > INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 16
> > INFOG(20) (estimated number of entries in the factors): 614160
> > INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 14
> > INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 14
> > INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0
> > INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1
> > INFOG(25) (after factorization: number of pivots modified by static pivoting): 0
> > INFOG(28) (after factorization: number of null pivots encountered): 0
> > INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 614160
> > INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 13, 13
> > INFOG(32) (after analysis: type of analysis done): 1
> > INFOG(33) (value used for ICNTL(8)): 7
> > INFOG(34) (exponent of the determinant if determinant is requested): 0
> > linear system matrix = precond matrix:
> > Mat Object: 1 MPI processes
> > type: seqaij
> > rows=2988, cols=2988
> > total: nonzeros=151488, allocated nonzeros=151488
> > total number of mallocs used during MatSetValues calls =0
> > using I-node routines: found 996 nodes, limit used is 5
>