Chetan,<br><br>I completely agree that (rank-revealing) QR should be the first choice. As a side note, from what he has said, his matrix is actually dense. <br><br>If his matrix were sparse, I am not sure how much sparsity would be lost to the column pivoting inherent in a rank-revealing QR. I know that the MUMPS group is either working on or has finished a sparse QR, but I don't know any details of their approach to pivoting (though I would be very interested!). Hopefully it could simply reuse the threshold pivoting approach used for sparse LU and LDL.<br>
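For the dense case, a small-scale sketch of the "basic" least-squares solve that a column-pivoted (rank-revealing) QR gives you, using dense NumPy/SciPy only; the function name and the rank tolerance `rtol` are illustrative, not from any particular library:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def pinv_solve_qr(A, b, rtol=1e-10):
    """Basic least-squares solution of A x ~= b via rank-revealing QR."""
    # Column-pivoted QR: A[:, piv] = Q @ R, with |diag(R)| non-increasing
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    # Estimate the numerical rank from the diagonal of R
    d = np.abs(np.diag(R))
    r = int(np.sum(d > rtol * d[0]))
    # Solve the leading r x r triangular system; the coefficients of the
    # non-pivot columns are set to zero (the "basic" solution)
    y = solve_triangular(R[:r, :r], Q[:, :r].conj().T @ b)
    x = np.zeros(A.shape[1], dtype=y.dtype)
    x[piv[:r]] = y
    return x
```

Note this returns a basic solution, not the minimum-norm one; getting the minimum-norm solution from a QR requires an extra step.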
<br>Jack<br><br><div class="gmail_quote">On Mon, Dec 19, 2011 at 1:38 PM, Chetan Jhurani <span dir="ltr"><<a href="mailto:chetan.jhurani@gmail.com">chetan.jhurani@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div link="blue" vlink="purple" lang="EN-US"><div><div class="im"><p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif"">> It can be further optimized using the Woodbury identity for two cases –<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif"">> rank <= size or rank >= size to reduce the size of auxiliary internal Cholesky solve.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif""><u></u> <u></u></span></p></div><p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif"">Sorry, that’s wrong. I meant rank <= size/2 or rank >= size/2. Size here<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif"">refers to square matrix size, but this analysis can be done for rectangular<u></u><u></u></span></p><p class="MsoNormal">
<span style="font-size:11pt;font-family:"Verdana","sans-serif"">case too.<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif""><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif"">Chetan<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif""><u></u> <u></u></span></p>
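One way to see the size reduction: if a rank factorization A = B C is available (B of size m x r with full column rank, C of size r x n with full row rank), the pseudoinverse needs only two r x r Cholesky solves instead of factorizations at the full matrix size. A sketch under that assumption (the factorization is taken as given, and the function name is illustrative):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def pinv_from_factors(B, C):
    """pinv(B @ C) via pinv(C) @ pinv(B), assuming B has full column rank
    and C has full row rank; only r x r Cholesky factorizations are needed."""
    fB = cho_factor(B.conj().T @ B)   # r x r Hermitian positive definite
    fC = cho_factor(C @ C.conj().T)   # r x r Hermitian positive definite
    # pinv(B) = (B^H B)^{-1} B^H  and  pinv(C) = C^H (C C^H)^{-1}
    pinvB = cho_solve(fB, B.conj().T)
    pinvC = cho_solve(fC, C).conj().T
    return pinvC @ pinvB
```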
<div style="border-width:medium medium medium 1.5pt;border-style:none none none solid;border-color:-moz-use-text-color -moz-use-text-color -moz-use-text-color blue;padding:0in 0in 0in 4pt"><div><div style="border-width:1pt medium medium;border-style:solid none none;border-color:rgb(181,196,223) -moz-use-text-color -moz-use-text-color;padding:3pt 0in 0in">
<p class="MsoNormal"><b><span style="font-size:10pt;font-family:"Tahoma","sans-serif"">From:</span></b><span style="font-size:10pt;font-family:"Tahoma","sans-serif""> Chetan Jhurani [mailto:<a href="mailto:chetan.jhurani@gmail.com" target="_blank">chetan.jhurani@gmail.com</a>] <br>
<b>Sent:</b> Monday, December 19, 2011 11:35 AM<br><b>To:</b> 'PETSc users list'<br><b>Subject:</b> RE: [petsc-users] Pseudoinverse of a large matrix<u></u><u></u></span></p></div></div><div class="im"><p class="MsoNormal">
<u></u> <u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif""><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif"">A couple of optimizations that are not there currently:<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif""><u></u> <u></u></span></p><p><u></u><span style="font-size:11pt;font-family:"Verdana","sans-serif""><span>1.<span style="font:7pt "Times New Roman""> </span></span></span><u></u><span style="font-size:11pt;font-family:"Verdana","sans-serif"">It can be changed to use sparse RRQR factorization for sparse input matrix.<u></u><u></u></span></p>
<p><u></u><span style="font-size:11pt;font-family:"Verdana","sans-serif""><span>2.<span style="font:7pt "Times New Roman""> </span></span></span><u></u><span style="font-size:11pt;font-family:"Verdana","sans-serif"">It can be further optimized using the Woodbury identity for two cases –<u></u><u></u></span></p>
<p><span style="font-size:11pt;font-family:"Verdana","sans-serif"">rank <= size or rank >= size to reduce the size of auxiliary internal Cholesky solve.<u></u><u></u></span></p><p class="MsoNormal">
<span style="font-size:11pt;font-family:"Verdana","sans-serif""><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif"">Chetan<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif""><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:"Verdana","sans-serif""><u></u> <u></u></span></p>
</div><div style="border-width:medium medium medium 1.5pt;border-style:none none none solid;border-color:-moz-use-text-color -moz-use-text-color -moz-use-text-color blue;padding:0in 0in 0in 4pt"><div class="im"><div><div style="border-width:1pt medium medium;border-style:solid none none;border-color:rgb(181,196,223) -moz-use-text-color -moz-use-text-color;padding:3pt 0in 0in">
<p class="MsoNormal"><b><span style="font-size:10pt;font-family:"Tahoma","sans-serif"">From:</span></b><span style="font-size:10pt;font-family:"Tahoma","sans-serif""> <a href="mailto:petsc-users-bounces@mcs.anl.gov" target="_blank">petsc-users-bounces@mcs.anl.gov</a> [<a href="mailto:petsc-users-bounces@mcs.anl.gov" target="_blank">mailto:petsc-users-bounces@mcs.anl.gov</a>] <b>On Behalf Of </b>Modhurita Mitra<br>
<b>Sent:</b> Monday, December 19, 2011 11:01 AM<br><b>To:</b> PETSc users list<br><b>Subject:</b> Re: [petsc-users] Pseudoinverse of a large matrix<u></u><u></u></span></p></div></div><p class="MsoNormal"><u></u> <u></u></p>
</div><p class="MsoNormal" style="margin-bottom:12pt">I am trying to express the radiation pattern of an antenna in terms of spherical harmonic basis functions. I have radiation pattern data at N=324360 points. My matrix therefore consists of the spherical harmonic basis functions up to order sqrt(N) (giving a total of N basis functions), evaluated at the N data points. So this is a linear least-squares problem, and I have been trying to solve it by computing the pseudoinverse, which uses the SVD. The elements of the matrix are complex, and the matrix is dense. I have solved the problem in MATLAB using a subsample of the data, but MATLAB runs out of memory while calculating the SVD once the input matrix reaches size 2500 X 2500. I need to solve this problem using the entire data set.</p>
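At small scale, the minimum-norm least-squares solve via a truncated SVD can be sketched as below (a dense, single-node NumPy sketch only; the full N=324360 problem would need parallel or out-of-core linear algebra, and the function name and tolerance are illustrative). It handles complex matrices and never forms the pseudoinverse explicitly:

```python
import numpy as np

def lstsq_min_norm(A, b, rtol=1e-8):
    """Minimum-norm least-squares solution x = pinv(A) @ b via truncated SVD."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > rtol * s[0]))  # numerical rank
    # x = V_r @ diag(1/s_r) @ U_r^H @ b, applied factor by factor
    return Vh[:r].conj().T @ ((U[:, :r].conj().T @ b) / s[:r])
```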
<div><div class="h5"><br><br>I was thinking of using SLEPc because I found a ready-to-use code which computes the SVD of a complex-valued matrix ( <a href="http://www.grycap.upv.es/slepc/documentation/current/src/svd/examples/tutorials/ex14.c.html" target="_blank">http://www.grycap.upv.es/slepc/documentation/current/src/svd/examples/tutorials/ex14.c.html</a> ). I don't know how to use either PETSc or SLEPc (or Elemental) yet, so I am trying to figure out where to start and what I should learn. <br>
<br>Thanks,<br>Modhurita<u></u><u></u></div></div><div><div class="h5"><div><p class="MsoNormal">On Mon, Dec 19, 2011 at 12:31 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<u></u><u></u></p>
<div><p class="MsoNormal">On Mon, Dec 19, 2011 at 12:21 PM, Modhurita Mitra <<a href="mailto:modhurita@gmail.com" target="_blank">modhurita@gmail.com</a>> wrote:<u></u><u></u></p></div><div><div><blockquote style="border-width:medium medium medium 1pt;border-style:none none none solid;border-color:-moz-use-text-color -moz-use-text-color -moz-use-text-color rgb(204,204,204);padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt">
<p class="MsoNormal">Hi,<br><br>I have to compute the pseudoinverse of a 324360 X 324360 matrix. Can PETSc compute the SVD of this matrix without parallelization? If parallelization is needed, do I need to use SLEPc?<u></u><u></u></p>
</blockquote><div><p class="MsoNormal"><u></u> <u></u></p></div></div><div><p class="MsoNormal">With enough memory, yes. However, I am not sure you want to wait. I am not sure how SLEPc would help here.<u></u><u></u></p></div>
<div><p class="MsoNormal">From the very very little detail you have given, you would need parallel linear algebra, like Elemental. However,<u></u><u></u></p></div><div><p class="MsoNormal">I would start out from a more fundamental viewpoint. Such as replacing "compute the pseudoinverse" with<u></u><u></u></p>
</div><div><p class="MsoNormal">"solve a least-squares problem" if that is indeed the case.<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal"> Matt<u></u><u></u></p>
</div><div><p class="MsoNormal"> <u></u><u></u></p></div><blockquote style="border-width:medium medium medium 1pt;border-style:none none none solid;border-color:-moz-use-text-color -moz-use-text-color -moz-use-text-color rgb(204,204,204);padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt">
<p class="MsoNormal"><br>Thanks,<br>Modhurita<u></u><u></u></p></blockquote></div><p class="MsoNormal"><span style="color:rgb(136,136,136)"><br><br clear="all"><u></u><u></u></span></p><div><p class="MsoNormal"><span style="color:rgb(136,136,136)"><u></u> <u></u></span></p>
</div><p class="MsoNormal"><span style="color:rgb(136,136,136)">-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener</span><u></u><u></u></p></div><p class="MsoNormal"><u></u> <u></u></p></div></div></div></div></div></div></blockquote></div><br>