On Mon, Feb 8, 2021 at 11:40 AM Dave May <dave.mayhem23@gmail.com> wrote:

On Mon 8. Feb 2021 at 15:49, Matthew Knepley <knepley@gmail.com> wrote:

On Mon, Feb 8, 2021 at 9:37 AM Jose E. Roman <jroman@dsic.upv.es> wrote:

The problem can be written as A0*v = omega*B0*v and you want the eigenvalues omega closest to zero. If the matrices were explicitly available, you would do shift-and-invert with target=0, that is

(A0 - sigma*B0)^{-1}*B0*v = theta*v   for sigma = 0, that is

A0^{-1}*B0*v = theta*v

and you compute EPS_LARGEST_MAGNITUDE eigenvalues theta = 1/omega.
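For reference, the explicit-matrix setup described here might look roughly like the following in slepc4py (a sketch only, not code from the thread; A0 and B0 stand for assembled PETSc matrices, and EPS_TARGET_MAGNITUDE with target 0 is the ST-based way of asking for the largest theta of the transformed operator):

from slepc4py import SLEPc

eps = SLEPc.EPS().create()
eps.setOperators(A0, B0)                                   # pencil A0*v = omega*B0*v
st = eps.getST()
st.setType(SLEPc.ST.Type.SINVERT)                          # shift-and-invert
eps.setTarget(0.0)                                         # sigma = 0
eps.setWhichEigenpairs(SLEPc.EPS.Which.TARGET_MAGNITUDE)   # omega closest to 0
eps.setFromOptions()
eps.solve()
for i in range(eps.getConverged()):
    omega = eps.getEigenvalue(i)                           # returned already back-transformed
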
Matt: I guess you should have EPS_LARGEST_MAGNITUDE instead of EPS_SMALLEST_REAL in your code. Are you getting the eigenvalues you need? EPS_SMALLEST_REAL will give slow convergence.

Matthew Knepley replied:

Thanks, Jose! There is a step here I am not understanding. I want the smallest eigenvalues. Should I use EPS_SMALLEST_MAGNITUDE? I appear to get what I want using SMALLEST_REAL, but as you say it might be slower than it has to be.

Dave May replied:

With shift-and-invert you want to use EPS_LARGEST_MAGNITUDE, as Jose says. The largest-magnitude eigenvalues theta that you obtain from the transformed system (see Jose's equation above) correspond to the smallest-magnitude eigenvalues omega of the original problem.

Matthew Knepley replied:

Okay. In my system for BAMG, however, I do not have 1/omega but just lambda, so I think it should be EPS_SMALLEST_MAGNITUDE now. I can check that.

  Thanks,

     Matt

Dave May's message continued:

Cheers,
Dave

Matthew Knepley's earlier message continued:

Also, sometime I would like to talk about incorporating the multilevel eigensolver. I am sure you could make lots of improvements to my initial attempt. I will send you a separate email, since I am getting serious about testing it.

  Thanks,

     Matt
Jose E. Roman's message continued:

Florian: I would not recommend setting the KSP matrices directly; it may produce strange side effects. We should have an interface function to pass this matrix. Currently there is STPrecondSetMatForPC(), but it has two problems: (1) it is intended for STPRECOND, so it cannot be used with Krylov-Schur, and (2) it is not currently available in the python interface.

The approach used by Matt is a workaround that does not use ST, so you can handle the linear solves with a KSP of your own.

As an alternative, since your problem is symmetric, you could try LOBPCG, assuming that the leftmost eigenvalues are the ones you want (e.g. if all eigenvalues are non-negative). In that case you could use STPrecondSetMatForPC(), but the remaining issue is calling it from python.
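For illustration, selecting LOBPCG from slepc4py might look like the sketch below (an assumption-laden sketch: A0 and B0 are placeholders, the problem is posed as a Hermitian pencil, and the STPrecondSetMatForPC() call mentioned above is omitted because it has no python binding):

from slepc4py import SLEPc

eps = SLEPc.EPS().create()
eps.setOperators(A0, B0)                                 # placeholders for the two operators
eps.setProblemType(SLEPc.EPS.ProblemType.GHEP)           # generalized Hermitian problem
eps.setType(SLEPc.EPS.Type.LOBPCG)
eps.setWhichEigenpairs(SLEPc.EPS.Which.SMALLEST_REAL)    # leftmost eigenvalues
eps.setFromOptions()
eps.solve()
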
If you are using the git repo, I could add the relevant code.

Jose

> On 8 Feb 2021, at 14:22, Matthew Knepley <knepley@gmail.com> wrote:
> 
> On Mon, Feb 8, 2021 at 7:04 AM Florian Bruckner <e0425375@gmail.com> wrote:
> Dear PETSc / SLEPc Users,
> 
> My question is very similar to the one posted here:
> https://lists.mcs.anl.gov/pipermail/petsc-users/2018-August/035878.html
> 
> The eigensystem I would like to solve looks like:
> B0 v = 1/omega A0 v
> B0 and A0 are both Hermitian, and A0 is positive definite, but A0 is only given as a linear operator (matshell). I am looking for the largest eigenvalues (= smallest omega).
> 
> I also have a sparse approximation P0 of the A0 operator, which I would like to use as a preconditioner, using something like this:
> 
> es = SLEPc.EPS().create(comm=fd.COMM_WORLD)
> st = es.getST()
> ksp = st.getKSP()
> ksp.setOperators(self.A0, self.P0)
> 
> Unfortunately PETSc still complains that it cannot create a preconditioner for a matrix of type 'python', although P0.type == 'seqaij' (but A0.type == 'python').
> By the way, should P0 be an approximation of A0, or does it have to include B0?
> 
> Right now I am using the Krylov-Schur method. Are there any alternatives if A0 is only given as an operator?
> 
> Jose can correct me if I say something wrong.
> 
> When I did this, I made a shell operator for the action of A0^{-1} B0 which has a KSPSolve() in it, so you can use your P0 preconditioning matrix, and
> then handed that operator to EPS. You can see me do it here:
> 
> https://gitlab.com/knepley/bamg/-/blob/master/src/coarse/bamgCoarseSpace.c#L123
> 
> I had a hard time getting the embedded solver to work the way I wanted, but maybe that is the better way.
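For concreteness, that kind of shell operator might look like the following petsc4py/slepc4py sketch (an illustration only, not the BAMG code linked above; A0, B0, and P0 are placeholders for the operators discussed in this thread):

from petsc4py import PETSc
from slepc4py import SLEPc

class InvA0B0(object):
    # Applies y = A0^{-1} * B0 * x, with the solve preconditioned by the sparse P0.
    def __init__(self, A0, B0, P0):
        self.B0 = B0
        self.work = B0.createVecLeft()        # work vector for B0*x
        self.ksp = PETSc.KSP().create(comm=A0.getComm())
        self.ksp.setOperators(A0, P0)         # A0 defines the system, P0 builds the PC
        self.ksp.setType('cg')                # A0 is Hermitian positive definite
        self.ksp.setFromOptions()

    def mult(self, mat, x, y):
        self.B0.mult(x, self.work)            # work = B0*x
        self.ksp.solve(self.work, y)          # y = A0^{-1}*work

T = PETSc.Mat().createPython(A0.getSizes(), context=InvA0B0(A0, B0, P0),
                             comm=A0.getComm())
T.setUp()

eps = SLEPc.EPS().create(comm=A0.getComm())
eps.setOperators(T)                           # standard problem T*v = theta*v
eps.setWhichEigenpairs(SLEPc.EPS.Which.LARGEST_MAGNITUDE)   # theta = 1/omega
eps.setFromOptions()
eps.solve()

The largest-magnitude theta returned by this standard problem then correspond to the smallest-magnitude omega of the original pencil, as discussed above.
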
> 
> Thanks,
> 
>      Matt
> 
> thanks for any advice
> best wishes
> Florian
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/