<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Kuang,</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
PETSc supports MatIsHermitian() for the SeqAIJ, IS, and SeqSBAIJ matrix types. What is your matrix type?</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
We should be able to add this support to other matrix types.</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Hong</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> petsc-users <petsc-users-bounces@mcs.anl.gov> on behalf of Wang, Kuang-chung <kuang-chung.wang@intel.com><br>
<b>Sent:</b> Thursday, December 2, 2021 2:06 PM<br>
<b>To:</b> Jose E. Roman <jroman@dsic.upv.es><br>
<b>Cc:</b> petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>; Obradovic, Borna <borna.obradovic@intel.com>; Cea, Stephen M <stephen.m.cea@intel.com><br>
<b>Subject:</b> Re: [petsc-users] Orthogonality of eigenvectors in SLEPC</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">Thanks Jose for your prompt reply.<br>
I did find that my matrix is indeed non-Hermitian. By forcing the solver to treat the problem as Hermitian, the orthogonality was restored.
<br>
But I still need to find the root cause of why my matrix is non-Hermitian in the first place. <br>
Along the way, I highly recommend the MatIsHermitian() function, or combining MatHermitianTranspose(), MatAXPY(), and MatNorm() to measure the hermiticity and safeguard the program.
<br>
<br>
Best,<br>
Kuang <br>
<br>
-----Original Message-----<br>
From: Jose E. Roman <jroman@dsic.upv.es> <br>
Sent: Wednesday, November 24, 2021 6:20 AM<br>
To: Wang, Kuang-chung <kuang-chung.wang@intel.com><br>
Cc: petsc-users@mcs.anl.gov; Obradovic, Borna <borna.obradovic@intel.com>; Cea, Stephen M <stephen.m.cea@intel.com><br>
Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC<br>
<br>
In Hermitian eigenproblems orthogonality of eigenvectors is guaranteed/enforced. But you are solving the problem as non-Hermitian.<br>
<br>
If your matrix is Hermitian, make sure you solve it as a HEP, and make sure that your matrix is numerically Hermitian.<br>
<br>
If your matrix is non-Hermitian, then you cannot expect the eigenvectors to be orthogonal. What you can do in this case is get an orthogonal basis of the computed eigenspace, see
<a href="https://slepc.upv.es/documentation/current/docs/manualpages/EPS/EPSGetInvariantSubspace.html">
https://slepc.upv.es/documentation/current/docs/manualpages/EPS/EPSGetInvariantSubspace.html</a><br>
<br>
<br>
By the way, version 3.7 is more than 5 years old; it is better if you can upgrade to a more recent version.<br>
<br>
Jose<br>
<br>
<br>
<br>
> On 24 Nov 2021, at 7:15, Wang, Kuang-chung <kuang-chung.wang@intel.com> wrote:<br>
> <br>
> Dear Jose : <br>
> I came across this thread describing an issue where krylovschur produces non-orthogonal eigenvectors.<br>
> <a href="https://lists.mcs.anl.gov/pipermail/petsc-users/2014-October/023360.html">
https://lists.mcs.anl.gov/pipermail/petsc-users/2014-October/023360.html</a><br>
> <br>
> Furthermore, I have tested reducing the tolerance, as highlighted below, from 1e-12 to 1e-16, with no luck.<br>
> Could you please suggest options/sources to try out ? <br>
> Thanks a lot for sharing your knowledge! <br>
> <br>
> Sincerely,<br>
> Kuang-Chung Wang<br>
> <br>
> =======================================================<br>
> Kuang-Chung Wang<br>
> Computational and Modeling Technology<br>
> Intel Corporation<br>
> Hillsboro OR 97124<br>
> =======================================================<br>
> <br>
> Here are more info: <br>
> • slepc/3.7.4<br>
> • output message from by doing EPSView(eps,PETSC_NULL):<br>
> EPS Object: 1 MPI processes<br>
> type: krylovschur<br>
> Krylov-Schur: 50% of basis vectors kept after restart<br>
> Krylov-Schur: using the locking variant<br>
> problem type: non-hermitian eigenvalue problem<br>
> selected portion of the spectrum: closest to target: 20.1161 (in magnitude)<br>
> number of eigenvalues (nev): 40<br>
> number of column vectors (ncv): 81<br>
> maximum dimension of projected problem (mpd): 81<br>
> maximum number of iterations: 1000<br>
> tolerance: 1e-12<br>
> convergence test: relative to the eigenvalue<br>
> BV Object: 1 MPI processes<br>
> type: svec<br>
> 82 columns of global length 2988<br>
> vector orthogonalization method: classical Gram-Schmidt<br>
> orthogonalization refinement: always<br>
> block orthogonalization method: Gram-Schmidt<br>
> doing matmult as a single matrix-matrix product<br>
> DS Object: 1 MPI processes<br>
> type: nhep<br>
> ST Object: 1 MPI processes<br>
> type: sinvert<br>
> shift: 20.1161<br>
> number of matrices: 1<br>
> KSP Object: (st_) 1 MPI processes<br>
> type: preonly<br>
> maximum iterations=1000, initial guess is zero<br>
> tolerances: relative=1.12005e-09, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using NONE norm type for convergence test<br>
> PC Object: (st_) 1 MPI processes<br>
> type: lu<br>
> LU: out-of-place factorization<br>
> tolerance for zero pivot 2.22045e-14<br>
> matrix ordering: nd<br>
> factor fill ratio given 0., needed 0.<br>
> Factored matrix follows:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=2988, cols=2988<br>
> package used to perform factorization: mumps<br>
> total: nonzeros=614160, allocated nonzeros=614160<br>
> total number of mallocs used during MatSetValues calls =0<br>
> MUMPS run parameters:<br>
> SYM (matrix type): 0 <br>
> PAR (host participation): 1 <br>
> ICNTL(1) (output for error): 6 <br>
> ICNTL(2) (output of diagnostic msg): 0 <br>
> ICNTL(3) (output for global info): 0 <br>
> ICNTL(4) (level of printing): 0 <br>
> ICNTL(5) (input mat struct): 0 <br>
> ICNTL(6) (matrix prescaling): 7 <br>
> ICNTL(7) (sequential matrix ordering):7 <br>
> ICNTL(8) (scaling strategy): 77 <br>
> ICNTL(10) (max num of refinements): 0 <br>
> ICNTL(11) (error analysis): 0 <br>
> ICNTL(12) (efficiency control): 1<br>
> ICNTL(13) (efficiency control): 0<br>
> ICNTL(14) (percentage of estimated workspace increase): 20<br>
> ICNTL(18) (input mat struct): 0<br>
> ICNTL(19) (Schur complement info): 0<br>
> ICNTL(20) (rhs sparse pattern): 0<br>
> ICNTL(21) (solution struct): 0<br>
> ICNTL(22) (in-core/out-of-core facility): 0<br>
> ICNTL(23) (max size of memory can be allocated locally):0<br>
> ICNTL(24) (detection of null pivot rows): 0<br>
> ICNTL(25) (computation of a null space basis): 0<br>
> ICNTL(26) (Schur options for rhs or solution): 0<br>
> ICNTL(27) (experimental parameter): -24<br>
> ICNTL(28) (use parallel or sequential ordering): 1<br>
> ICNTL(29) (parallel ordering): 0<br>
> ICNTL(30) (user-specified set of entries in inv(A)): 0<br>
> ICNTL(31) (factors is discarded in the solve phase): 0<br>
> ICNTL(33) (compute determinant): 0<br>
> CNTL(1) (relative pivoting threshold): 0.01<br>
> CNTL(2) (stopping criterion of refinement): 1.49012e-08<br>
> CNTL(3) (absolute pivoting threshold): 0.<br>
> CNTL(4) (value of static pivoting): -1.<br>
> CNTL(5) (fixation for null pivots): 0.<br>
> RINFO(1) (local estimated flops for the elimination after analysis):<br>
> [0] 8.15668e+07 <br>
> RINFO(2) (local estimated flops for the assembly after factorization):<br>
> [0] 892584. <br>
> RINFO(3) (local estimated flops for the elimination after factorization):<br>
> [0] 8.15668e+07 <br>
> INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization):<br>
> [0] 16 <br>
> INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization):<br>
> [0] 16 <br>
> INFO(23) (num of pivots eliminated on this processor after factorization):<br>
> [0] 2988 <br>
> RINFOG(1) (global estimated flops for the elimination after analysis): 8.15668e+07<br>
> RINFOG(2) (global estimated flops for the assembly after factorization): 892584.<br>
> RINFOG(3) (global estimated flops for the elimination after factorization): 8.15668e+07<br>
> (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0.,0.)*(2^0)<br>
> INFOG(3) (estimated real workspace for factors on all processors after analysis): 614160<br>
> INFOG(4) (estimated integer workspace for factors on all processors after analysis): 31971<br>
> INFOG(5) (estimated maximum front size in the complete tree): 246<br>
> INFOG(6) (number of nodes in the complete tree): 197<br>
> INFOG(7) (ordering option effectively use after analysis): 2<br>
> INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100<br>
> INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 614160<br>
> INFOG(10) (total integer space store the matrix factors after factorization): 31971<br>
> INFOG(11) (order of largest frontal matrix after factorization): 246<br>
> INFOG(12) (number of off-diagonal pivots): 0<br>
> INFOG(13) (number of delayed pivots after factorization): 0<br>
> INFOG(14) (number of memory compress after factorization): 0<br>
> INFOG(15) (number of steps of iterative refinement after solution): 0<br>
> INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 16<br>
> INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 16<br>
> INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 16<br>
> INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 16<br>
> INFOG(20) (estimated number of entries in the factors): 614160<br>
> INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 14<br>
> INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 14<br>
> INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0<br>
> INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1<br>
> INFOG(25) (after factorization: number of pivots modified by static pivoting): 0<br>
> INFOG(28) (after factorization: number of null pivots encountered): 0<br>
> INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 614160<br>
> INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 13, 13<br>
> INFOG(32) (after analysis: type of analysis done): 1<br>
> INFOG(33) (value used for ICNTL(8)): 7<br>
> INFOG(34) (exponent of the determinant if determinant is requested): 0<br>
> linear system matrix = precond matrix:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=2988, cols=2988<br>
> total: nonzeros=151488, allocated nonzeros=151488<br>
> total number of mallocs used during MatSetValues calls =0<br>
> using I-node routines: found 996 nodes, limit used is 5<br>
<br>
</div>
</span></font></div>
</body>
</html>