<div dir="ltr">Dear Hong,<div><br></div><div><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Dec 28, 2016 at 4:42 PM, Hong <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">Vijay:</div><div class="gmail_quote">The performance of an eigenvalue computation depends on many factors:</div><div class="gmail_quote">- matrix features, location of the eigenvalues, orthogonalization of the eigenvectors</div><div class="gmail_quote">- how many eigenpairs you compute, largest/smallest part of the spectrum, requested accuracy</div><div class="gmail_quote">- the algorithms used</div><div class="gmail_quote">- the computer used ... </div></div></div></blockquote><div><br></div><div class="gmail_extra"><div class="gmail_quote"><div>I've used the Krylov-Schur solver from SLEPc.</div><div>I've asked for the two lowest eigenvalues with a convergence tolerance of 1e-10.
</div><div>The matrix has at most 48 nonzero elements per row.</div><div>Here are some details about the cluster:</div></div></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_extra"><div class="gmail_quote">Processor: <span style="color:rgb(0,0,0);text-align:justify"><font face="arial, helvetica, sans-serif">Intel(R) Ivy Bridge, 2.8 GHz, 10 cores (dual socket)</font></span></div></div></blockquote><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_extra"><div class="gmail_quote"><span style="color:rgb(0,0,0);text-align:justify"><font face="arial, helvetica, sans-serif">RAM: 64 GB</font></span></div></div></blockquote><div><span style="color:rgb(0,0,0);text-align:justify"><font face="arial, helvetica, sans-serif">Interconnect: I</font></span><span style="color:rgb(0,0,0);font-family:"trebuchet ms",verdana,arial,helvetica,sans-serif;text-align:justify">nfiniBand (Full Data Rate, ~6.89 Gb/s)</span></div><div><span style="color:rgb(0,0,0);font-family:"trebuchet ms",verdana,arial,helvetica,sans-serif;text-align:justify"></span> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="gmail-"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div><br></div><div>I'm doing exact diagonalization studies of a phenomenological model Hamiltonian. In this study I have to repeatedly diagonalize large sparse matrices in a Hilbert space of Slater determinants.</div></div></blockquote></span><div>Why do you carry out these experiments? 
For solving this type of problem, I would suggest searching the related research publications and comparing your results.</div></div></div></div></blockquote></div></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_extra"><div class="gmail_quote"><div><br></div></div></div></blockquote><div class="gmail_extra"><div class="gmail_quote"><div>I'm using a variant of the traditional Double Exchange Hamiltonian.</div><div>I'm interested in a specific region of the parameter space that is not fully explored in the literature. In this region the low-energy spectrum is unusually dense (hence the exact diagonalization technique). To my knowledge such a parameter set has not been explored before.</div><div><br></div><div>Hope this answers your question...</div><div><br></div><div>Thanks,</div><div> Vijay</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="gmail-"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>I've successfully used PETSc + SLEPc to compute a few of the smallest eigenvalues. </div><div>For example I've been able to diagonalize a matrix of dimension <span style="font-size:12.8px"><i>91454220</i> with 990 processors. This diagonalization took </span><i style="font-size:12.8px">15328.695847 </i><span style="font-size:12.8px">s (about <i>4.26</i> hours).</span></div></div></blockquote><div> </div></span><div>A matrix size of 91M is quite impressive. 
</div><span class="gmail-HOEnZb"><font color="#888888"><div><br></div><div>Hong</div></font></span><span class="gmail-"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">I have two questions:</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">1. Is this time reasonable? If not, is it possible to optimize it further?</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">2. I've tried a quick Google search but could not find comprehensive benchmarks of the SLEPc library for sparse-matrix diagonalization. Could you point me to a publication or resource with such benchmarks?</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Thanks for your help.</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">PETSc Version: master branch commit: b33322e</span></div><div><span style="font-size:12.8px">SLEPc Version: master branch commit: c596d1c </span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Best,</span></div><div><span style="font-size:12.8px"> Vijay</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><br></span></div></div>
</blockquote></span></div><br></div></div>
</blockquote></div><br></div></div>
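The computation discussed in this thread (the two smallest eigenvalues of a large symmetric sparse matrix, tolerance 1e-10) can be sanity-checked at small scale. The sketch below uses SciPy's ARPACK wrapper eigsh on a toy 1-D Laplacian; it is not SLEPc, and the matrix, its size, and the shift-invert choice are illustrative assumptions, not the production run described above.

```python
# Hypothetical small-scale cross-check of the request in the thread:
# the two smallest eigenvalues of a symmetric sparse matrix, tol 1e-10.
# Uses SciPy's ARPACK wrapper (eigsh), NOT SLEPc's Krylov-Schur solver.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
# Toy symmetric sparse matrix: 1-D Dirichlet Laplacian (tridiagonal,
# at most 3 nonzeros per row). Its eigenvalues are known exactly:
# lambda_k = 2 - 2*cos(k*pi/(n+1)), k = 1..n.
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")

# Ask for the two eigenvalues nearest 0 via shift-invert (sigma=0),
# which converges quickly for the smallest end of the spectrum.
vals, vecs = eigsh(A, k=2, sigma=0, which="LM", tol=1e-10)

# Compare against the closed-form smallest two eigenvalues.
exact = 2.0 - 2.0 * np.cos(np.arange(1, 3) * np.pi / (n + 1))
print(np.allclose(np.sort(vals), exact, atol=1e-8))  # → True
```

For the actual SLEPc run, the equivalent request is expressed with runtime options along the lines of -eps_type krylovschur -eps_nev 2 -eps_smallest_real -eps_tol 1e-10; adding -log_view to the PETSc run prints the profiling data usually needed before discussing whether a solve time is reasonable.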