[petsc-users] SLEPc: smallest eigenvalues

Jose E. Roman jroman at dsic.upv.es
Tue Sep 21 13:14:57 CDT 2021


I will have a look at your code when I have more time. Meanwhile, I am answering 3) below...

> On 21 Sep 2021, at 0:23, Varun Hiremath <varunhiremath at gmail.com> wrote:
> 
> Hi Jose,
> 
> Sorry, it took me a while to test these settings in the new builds. I am seeing a good performance improvement with the preconditioned solvers, so thanks for the suggestions! But I have some questions about the usage.
> 
> We are using SLEPc to solve the acoustic modal eigenvalue problem. Attached is a simple standalone program that computes acoustic modes in a simple rectangular box. This program illustrates the general setup I am using, though here the shell matrix and the preconditioner matrix are the same, while in my actual program the shell matrix computes A*x without explicitly forming A, and the preconditioner is a 0th order approximation of A.
> 
> In the attached program I have tested both:
> 1) Krylov-Schur with inexact shift-and-invert (enabled with the option -sinvert);
> 2) the JD solver with a preconditioner (enabled with the option -usejd).
> 
> Both solvers seem to work decently compared to no preconditioning. This is how I run the two solvers (for a mesh size of 1600x400):
> $ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1 -eps_target 0
> $ ./acoustic_box_test.o -nx 1600 -ny 400 -sinvert 1 -deflate 1 -eps_target 0
> Both finish in about 10 minutes on my system in serial. JD seems to be slightly faster and more accurate (for the imaginary part of the eigenvalue).
> The program also runs in parallel using mpiexec. I use complex builds, as in my main program the matrix can be complex.
> 
> Now here are my questions:
> 1) For this particular problem type, could you please check whether these are the best settings one could use? I have tried different combinations of KSP/PC types (e.g. GMRES, GAMG), but BCGSL + BJACOBI seems to work best in both serial and parallel.
> 
> 2) When I tested these settings in my main program, for some reason the JD solver was not converging. After further testing, I found the issue was related to the setting of "-eps_target 0". I have included "EPSSetTarget(eps,0.0);" in the program and I assumed this is equivalent to passing "-eps_target 0" from the command line, but that doesn't seem to be the case. For instance, if I run the attached program without "-eps_target 0" in the command line then it doesn't converge.
> $ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1 -eps_target 0
>  the above finishes in about 10 minutes
> $ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1
>  the above doesn't converge even though "EPSSetTarget(eps,0.0);" is included in the code
> 
> This only seems to affect the JD solver, not the Krylov shift-and-invert (-sinvert 1) option. So is there any difference between passing "-eps_target 0" from the command line vs using "EPSSetTarget(eps,0.0);" in the code? I cannot pass any command line arguments in my actual program, so I need to set everything internally.
> 
> 3) Also, a minor related issue: while using the inexact shift-and-invert option, I was running into the following error:
> 
> ""
> Missing or incorrect user input
> Shift-and-invert requires a target 'which' (see EPSSetWhichEigenpairs), for instance -st_type sinvert -eps_target 0 -eps_target_magnitude
> ""
> 
> I already have the below two lines in the code:
> EPSSetWhichEigenpairs(eps,EPS_SMALLEST_MAGNITUDE);
> EPSSetTarget(eps,0.0);
> 
> so shouldn't these be enough? If I comment out the first line "EPSSetWhichEigenpairs", then the code works fine.

You should either do

EPSSetWhichEigenpairs(eps,EPS_SMALLEST_MAGNITUDE);

without shift-and-invert or

EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);
EPSSetTarget(eps,0.0);

with shift-and-invert. The latter can also be used without shift-and-invert (e.g. in JD).
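
The same thing in source code, as a minimal sketch (assuming eps is your EPS object; error checking omitted):

ST st;
EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);
EPSSetTarget(eps,0.0);
EPSGetST(eps,&st);
STSetType(st,STSINVERT);

This corresponds to -st_type sinvert -eps_target 0 -eps_target_magnitude on the command line.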

I have to check, but a possible explanation for why the command-line option -eps_target 0 behaves differently in your point (2) above is that it also sets -eps_target_magnitude when that option is omitted. So, to be equivalent in source code, you have to call both
EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);
EPSSetTarget(eps,0.0);

Jose

> I have some more questions regarding setting the preconditioner for a quadratic eigenvalue problem, which I will ask in a follow-up email.
> 
> Thanks for your help!
> 
> -Varun
> 
> 
> On Thu, Jul 1, 2021 at 5:01 AM Varun Hiremath <varunhiremath at gmail.com> wrote:
> Thank you very much for these suggestions! We are currently using version 3.12, so I'll try to update to the latest version and try your suggestions. Let me get back to you, thanks!
> 
> On Thu, Jul 1, 2021, 4:45 AM Jose E. Roman <jroman at dsic.upv.es> wrote:
> Then I would try Davidson methods: https://doi.org/10.1145/2543696
> You can also try Krylov-Schur with "inexact" shift-and-invert, for instance with preconditioned BiCGStab or GMRES; see section 3.4.1 of the users manual.
> 
> In both cases, you have to pass matrix A in the call to EPSSetOperators() and the preconditioner matrix via STSetPreconditionerMat(); note that this function was introduced in version 3.15.
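> 
> A minimal sketch of that setup (Ashell and Pmat are placeholder names for your shell matrix and explicit preconditioner matrix; error checking omitted):
> 
> ST st;
> EPSSetOperators(eps,Ashell,NULL);  /* A given as a shell matrix (standard problem) */
> EPSGetST(eps,&st);
> STSetPreconditionerMat(st,Pmat);   /* preconditioner built from the approximate A */
> EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);
> EPSSetTarget(eps,0.0);
> EPSSetFromOptions(eps);            /* e.g. -eps_type jd, or -st_type sinvert for Krylov-Schur */
> EPSSolve(eps);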
> 
> Jose
> 
> 
> 
> > On 1 Jul 2021, at 13:36, Varun Hiremath <varunhiremath at gmail.com> wrote:
> > 
> > Thanks. I actually do have a 1st order approximation of matrix A that I can explicitly compute and also invert. Can I use that matrix as a preconditioner to speed things up? Is there an example that explains how to set up and call SLEPc for this scenario?
> > 
> > On Thu, Jul 1, 2021, 4:29 AM Jose E. Roman <jroman at dsic.upv.es> wrote:
> > For smallest real parts one could adapt ex34.c, but it is going to be costly: https://slepc.upv.es/documentation/current/src/eps/tutorials/ex36.c.html
> > Also, if eigenvalues are clustered around the origin, convergence may still be very slow.
> > 
> > It is a tough problem, unless you are able to compute a good preconditioner of A (no need to compute the exact inverse).
> > 
> > Jose
> > 
> > 
> > > On 1 Jul 2021, at 13:23, Varun Hiremath <varunhiremath at gmail.com> wrote:
> > > 
> > > I'm solving for the smallest eigenvalues in magnitude. Would it be cheaper to solve for the smallest real part instead? That might also work in my case. Thanks for your help.
> > > 
> > > On Thu, Jul 1, 2021, 4:08 AM Jose E. Roman <jroman at dsic.upv.es> wrote:
> > > Smallest eigenvalue in magnitude or real part?
> > > 
> > > 
> > > > On 1 Jul 2021, at 11:58, Varun Hiremath <varunhiremath at gmail.com> wrote:
> > > > 
> > > > Sorry, no, both A and B are general sparse matrices (non-Hermitian). So is there anything else I could try?
> > > > 
> > > > On Thu, Jul 1, 2021 at 2:43 AM Jose E. Roman <jroman at dsic.upv.es> wrote:
> > > > Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on the pair (A,B). But this will likely be slow as well, unless you can provide a good preconditioner.
> > > > 
> > > > Jose
> > > > 
> > > > 
> > > > > On 1 Jul 2021, at 11:37, Varun Hiremath <varunhiremath at gmail.com> wrote:
> > > > > 
> > > > > Hi All,
> > > > > 
> > > > > I am trying to compute the smallest eigenvalues of a generalized system A*x = lambda*B*x. I don't explicitly know the matrix A (so I am using a shell matrix with a custom matmult function); however, the matrix B is explicitly known, so I compute inv(B)*A within the shell matrix and solve inv(B)*A*x = lambda*x.
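> > > > > 
> > > > > (A rough sketch of what such a shell matrix can look like; MyShellCtx, MatMult_invBA, and the inner KSP used to apply inv(B) are hypothetical names/assumptions, and error checking is omitted:)
> > > > > 
> > > > > typedef struct { Mat A; KSP kspB; Vec work; } MyShellCtx;
> > > > > 
> > > > > PetscErrorCode MatMult_invBA(Mat S,Vec x,Vec y)
> > > > > {
> > > > >   MyShellCtx *ctx;
> > > > >   MatShellGetContext(S,&ctx);
> > > > >   MatMult(ctx->A,x,ctx->work);      /* work = A*x (A may itself be applied matrix-free) */
> > > > >   KSPSolve(ctx->kspB,ctx->work,y);  /* y = inv(B)*work */
> > > > >   return 0;
> > > > > }
> > > > > 
> > > > > /* ctx is a MyShellCtx holding A, the KSP for B, and a work vector; S is then passed to EPSSetOperators() */
> > > > > MatCreateShell(PETSC_COMM_WORLD,nlocal,nlocal,N,N,&ctx,&S);
> > > > > MatShellSetOperation(S,MATOP_MULT,(void(*)(void))MatMult_invBA);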
> > > > > 
> > > > > To compute the smallest eigenvalues it is recommended to solve the inverted system, but since matrix A is not explicitly known I can't invert the system. Moreover, the system can be very large, and with the default Krylov solver it is extremely slow. So is there a better way for me to compute the smallest eigenvalues of this system?
> > > > > 
> > > > > Thanks,
> > > > > Varun
> > > > 
> > > 
> > 
> 
> <acoustic_box_test.cpp>


