On Fri, Dec 16, 2011 at 5:59 PM, Dave Nystrom <span dir="ltr"><<a href="mailto:Dave.Nystrom@tachyonlogic.com">Dave.Nystrom@tachyonlogic.com</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Barry Smith writes:<br>
><br>
> On Dec 16, 2011, at 9:52 AM, Matthew Knepley wrote:<br>
><br>
> > On Fri, Dec 16, 2011 at 9:37 AM, Dave Nystrom <<a href="mailto:dnystrom1@comcast.net">dnystrom1@comcast.net</a>> wrote:<br>
> > I'm trying to figure out whether I can do a couple of things with petsc.<br>
> ><br>
> > 1. It looks like the preconditioning matrix can actually be different from<br>
> > the full problem matrix. So I'm wondering if I could provide a different<br>
> > preconditioning matrix for my problem and then do an LU solve of the<br>
> > preconditioning matrix, using -pc_type lu as my preconditioner.<br>
> ><br>
> > Yes, that is what it is for.<br>
> ><br>
> > 2. When I build petsc, I use the --download-f-blas-lapack=yes option. I'm<br>
> > wondering if petsc uses lapack under the hood or has the capability to use<br>
> > lapack under the hood when one uses the -pc_type lu option. In particular,<br>
> > since my matrices are band matrices from doing a discretization on a 2d<br>
> > regular mesh, I'm wondering if the petsc lu solve has the ability to use the<br>
> > lapack band solver dgbsv or dgbsvx. Or is it possible to use the lapack band<br>
> > solver through one of the external packages that petsc can interface with.<br>
> > I'm interested in this capability for smaller problem sizes that fit on a<br>
> > single node and that make sense.<br>
> ><br>
> > We do not have any banded matrix stuff. It's either dense or sparse right now.<br>
><br>
> Dave,<br>
><br>
> As I noted before, your band is so large that lapack type band solvers<br>
> don't make sense. Using a general purpose sparse direct solver will be<br>
> much more efficient than using the lapack band solver.<br>
<br>
Thanks. I guess I don't really know what a "general purpose sparse direct<br>
solver" is. When I do a direct solve with petsc of one of my linear systems<br>
using "-ksp_type preonly -pc_type lu", and letting petsc decide on the matrix<br>
storage type, is petsc using a "general purpose sparse direct solver"? I<br></blockquote><div><br></div><div>No, it's dense.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
assume a lapack type band solver would fill in the whole band, which in my<br>
case should have a bandwidth of about 2*nx+1. But I guess I thought that was<br>
just the price of doing a direct solve on a banded matrix. Does a "general<br></blockquote><div><br></div><div>No one would use a band solver for a sqrt(n) band.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
purpose sparse direct solver" somehow use less fill in? Could you point me<br>
to a good reference on general purpose sparse direct solvers?<br></blockquote><div><br></div><div>Any of the torrent of papers about SuperLU or MUMPS. Googable.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
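</blockquote><div><br></div><div>To see why, compare the costs for a 5-point stencil on an nx-by-ny grid ordered row by row: n = nx*ny and the half-bandwidth is nx, so a LAPACK band factorization (dgbsv) fills the entire band, roughly O(n*nx^2) = O(n^2) flops, while sparse LU with a nested-dissection ordering needs only O(n log n) fill and O(n^1.5) flops. A back-of-the-envelope sketch (leading-order estimates only; the constants are indicative, not exact operation counts):</div><div>

```python
import math

def band_lu_cost(nx, ny):
    """Rough cost of LAPACK band LU (dgbsv) on the 5-point matrix:
    half-bandwidths kl = ku = nx, so storage ~ n*(2*kl + ku + 1)
    and flops ~ 2*n*kl*ku (leading order only)."""
    n = nx * ny
    return n * (3 * nx + 1), 2 * n * nx * nx

def nested_dissection_cost(nx, ny):
    """Leading-order cost of sparse LU with nested dissection on a
    2-D grid: O(n log n) fill, O(n**1.5) flops (constants dropped)."""
    n = nx * ny
    return n * math.log2(n), n ** 1.5

_, band_flops = band_lu_cost(256, 256)
_, nd_flops = nested_dissection_cost(256, 256)
print(band_flops / nd_flops)  # -> 512.0: the band solver does ~500x more work
```

</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">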
I have been finding that for one or two of my linear systems on a single node,<br>
using "-ksp_type preonly -pc_type lu" for small enough mesh sizes is the<br>
most efficient way to solve the system. So I have been wondering if I could<br>
find a highly tuned and optimized package to do the solve, for instance Intel<br>
mkl, that might run well with multiple threads. And Intel mkl does have a<br>
band solver that I assume is highly optimized. But I was not aware that a<br>
general purpose sparse direct solver would be computationally more efficient<br>
than a highly tuned band solver. But then again, I don't really understand<br>
what the difference is between the two.<br></blockquote><div><br></div><div>No, Barry is correct here.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
Given that I am having good results with "-ksp_type preonly -pc_type lu" with<br>
petsc for a couple of my systems with small enough problem sizes, would it be<br>
beneficial for me to try one of the external packages such as superlu or<br>
mumps or some other package?<br></blockquote><div><br></div><div>Yes.</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
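</blockquote><div><br></div><div>If your PETSc was configured with them (e.g. --download-superlu --download-mumps), switching is just a runtime option. A sketch, assuming a hypothetical executable ./myapp; note the option name varies across PETSc versions (-pc_factor_mat_solver_package in older releases, -pc_factor_mat_solver_type in newer ones), so check the docs for your version:</div><div>

```shell
# Direct solve through SuperLU (option name may differ by PETSc version)
./myapp -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu

# Direct solve through MUMPS, in parallel
mpiexec -n 4 ./myapp -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps
```

</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">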
Thanks,<br>
<br>
Dave<br>
<br>
> Barry<br>
><br>
> ><br>
> > 3. I'm also wondering how I might be able to learn more about the petsc ilu<br>
> > capability. My impression is that it does ilu(k) and I have tried it with<br>
> > k>0 but am wondering if one of the options might allow it to do ilut and<br>
> > whether, as k gets big, ilu(k) approximates lu. I currently do not<br>
> > understand the petsc ilu well enough to know how much extra fill I get as I<br>
> > increase k and where that extra fill might be located for the case of a band<br>
> > matrix that one gets from discretization on a regular 2d mesh.<br>
> ><br>
> > We do not do ilu(dt). It's complicated, and we determined that it was not worth<br>
> > the effort. You can get that from Hypre if you want. Certainly, for big enough<br>
> > k, ilu(k) is lu, but it's a slow way to do it.<br>
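</blockquote><div><br></div><div>For intuition on how ilu(k) fill grows, the standard level-of-fill rule can be simulated symbolically: every original nonzero starts at level 0, an update through pivot p gives lev(i,j) = min(lev(i,j), lev(i,p) + lev(p,j) + 1), and entries with level above k are dropped. A small self-contained sketch on a 5-point grid (the textbook rule, not PETSc's actual implementation):</div><div>

```python
import math

def five_point_pattern(nx, ny):
    """Nonzero pattern of a 5-point stencil on an nx-by-ny grid."""
    n = nx * ny
    nz = set()
    for y in range(ny):
        for x in range(nx):
            i = y * nx + x
            nz.add((i, i))
            if x > 0:
                nz.add((i, i - 1)); nz.add((i - 1, i))
            if y > 0:
                nz.add((i, i - nx)); nz.add((i - nx, i))
    return n, nz

def ilu_fill(n, nz, k):
    """Entries kept by symbolic ILU(k): an update through pivot p sets
    lev(i,j) = min(lev(i,j), lev(i,p) + lev(p,j) + 1); keep level <= k."""
    lev = {e: 0 for e in nz}
    for i in range(n):
        for p in range(i):
            lip = lev.get((i, p), math.inf)
            if lip > k:               # dropped entry cannot create fill
                continue
            for j in range(p + 1, n):
                lpj = lev.get((p, j), math.inf)
                if lpj > k:
                    continue
                new = lip + lpj + 1
                if new < lev.get((i, j), math.inf):
                    lev[(i, j)] = new
    return sum(1 for l in lev.values() if l <= k)

n, nz = five_point_pattern(8, 8)
fills = [ilu_fill(n, nz, k) for k in (0, 1, 2, n)]
# fills[0] == len(nz): ilu(0) keeps the original pattern;
# fills grow monotonically with k, and k = n gives the full symbolic lu
```

</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">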
> ><br>
> > Matt<br>
> ><br>
> > Thanks,<br>
> ><br>
> > Dave<br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> > -- Norbert Wiener<br>
><br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>