Vijay:
The performance of eigenvalue computation depends on many factors:
- matrix features, location of eigenvalues, orthogonalization of eigenvectors
- how many eigensolutions you compute, which end of the spectrum (largest/smallest), requested accuracy
- algorithms used
- computer used ...

> I'm doing exact diagonalization studies of some phenomenological model
> Hamiltonian. In this study I have to diagonalize large sparse matrices in a
> Hilbert space of Slater determinants many times.

Why do you carry out these experiments? For this type of problem, I would
suggest searching the related research publications and comparing your results
against them.

> I've successfully used PETSc + SLEPc to get a few smallest eigenvalues.
> For example, I've been able to diagonalize a matrix of rank 91454220 with
> 990 processors. This diagonalization took 15328.695847 sec (or 4.25 hrs).

The matrix size 91M is quite amazing.

Hong

> I have two questions:
>
> 1. Is this time reasonable? If not, is it possible to optimize further?
>
> 2. I've tried a quick Google search but could not find a comprehensive
> benchmarking of the SLEPc library for sparse matrix diagonalization. Could
> you point me to a publication/resource which has such a benchmarking?
>
> Thanks for your help.
>
> PETSc version: master branch, commit b33322e
> SLEPc version: master branch, commit c596d1c
>
> Best,
>  Vijay
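For reference, a minimal sketch of the kind of SLEPc driver discussed above
might look like the following: it computes a few smallest eigenvalues of a
sparse Hermitian matrix with the EPS object. This is not Vijay's actual code;
the 1-D Laplacian stand-in matrix, the size n = 1000, and nev = 5 are
illustrative assumptions rather than his Hamiltonian or settings.

/* Sketch: a few smallest eigenvalues of a sparse Hermitian matrix with SLEPc.
 * A small 1-D Laplacian stands in for the Hamiltonian; error checking is
 * omitted for brevity. Build against PETSc/SLEPc and run with mpiexec. */
#include <slepceps.h>

int main(int argc, char **argv)
{
  Mat         A;
  EPS         eps;
  PetscInt    n = 1000, Istart, Iend, i, nconv;
  PetscScalar kr, ki;

  SlepcInitialize(&argc, &argv, NULL, NULL);

  /* Toy sparse symmetric matrix (tridiagonal 1-D Laplacian) */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
    if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  /* Eigensolver: a few smallest eigenvalues of a Hermitian problem */
  EPSCreate(PETSC_COMM_WORLD, &eps);
  EPSSetOperators(eps, A, NULL);               /* standard problem A x = lambda x */
  EPSSetProblemType(eps, EPS_HEP);             /* Hermitian: cheaper, more robust */
  EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);
  EPSSetDimensions(eps, 5, PETSC_DEFAULT, PETSC_DEFAULT); /* nev = 5 (illustrative) */
  EPSSetFromOptions(eps);   /* honors -eps_type, -eps_tol, -eps_ncv, -eps_monitor, ... */

  EPSSolve(eps);
  EPSGetConverged(eps, &nconv);
  for (i = 0; i < nconv; i++) {
    EPSGetEigenvalue(eps, i, &kr, &ki);
    PetscPrintf(PETSC_COMM_WORLD, "lambda_%d = %g\n", (int)i,
                (double)PetscRealPart(kr));
  }

  EPSDestroy(&eps);
  MatDestroy(&A);
  SlepcFinalize();
  return 0;
}

Running such a driver with -eps_view and -log_view prints the solver
configuration and a performance summary, which is usually the first step when
asking whether a solve like the 4.25-hour run above can be optimized further.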