[petsc-users] using preconditioner with SLEPc

Dave May dave.mayhem23 at gmail.com
Mon Feb 8 10:41:42 CST 2021


On Mon 8. Feb 2021 at 17:40, Dave May <dave.mayhem23 at gmail.com> wrote:

>
>
> On Mon 8. Feb 2021 at 15:49, Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Mon, Feb 8, 2021 at 9:37 AM Jose E. Roman <jroman at dsic.upv.es> wrote:
>>
>>> The problem can be written as A0*v=omega*B0*v and you want the
>>> eigenvalues omega closest to zero. If the matrices were explicitly
>>> available, you would do shift-and-invert with target=0, that is
>>>
>>>   (A0-sigma*B0)^{-1}*B0*v=theta*v    for sigma=0, that is
>>>
>>>   A0^{-1}*B0*v=theta*v
>>>
>>> and you compute EPS_LARGEST_MAGNITUDE eigenvalues theta=1/omega.
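>>>
>>> For reference, a minimal slepc4py sketch of that setup (only a sketch, and
>>> it assumes A0 and B0 are explicitly assembled matrices, which is not the
>>> case here) would be something like:
>>>
>>>   from slepc4py import SLEPc
>>>
>>>   eps = SLEPc.EPS().create()
>>>   eps.setOperators(A0, B0)               # A0*v = omega*B0*v
>>>   # GHEP assumes B0 is positive (semi-)definite; use GHIEP otherwise
>>>   eps.setProblemType(SLEPc.EPS.ProblemType.GHEP)
>>>   st = eps.getST()
>>>   st.setType(SLEPc.ST.Type.SINVERT)      # shift-and-invert
>>>   eps.setTarget(0.0)                     # sigma = 0
>>>   eps.setWhichEigenpairs(SLEPc.EPS.Which.TARGET_MAGNITUDE)
>>>   eps.solve()
>>>   # With sinvert the solver computes the largest magnitude theta of
>>>   # (A0-sigma*B0)^{-1}*B0 internally and back-transforms, so
>>>   # eps.getEigenvalue(i) already returns omega, not theta.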
>>>
>>> Matt: I guess you should have EPS_LARGEST_MAGNITUDE instead of
>>> EPS_SMALLEST_REAL in your code. Are you getting the eigenvalues you need?
>>> EPS_SMALLEST_REAL will give slow convergence.
>>>
>>
>> Thanks Jose! There is a step I am not following. I want the smallest
>> eigenvalues. Should I use EPS_SMALLEST_MAGNITUDE? I appear to get what I
>> want using SMALLEST_REAL, but as you say it might be slower than it has
>> to be.
>>
>
>
> With shift-and-invert you want to use EPS_LARGEST_MAGNITUDE as Jose says.
> The largest magnitude v
>


Sorry, “v” should be “theta”!

> eigenvalues you obtain (see Jose's equation above) from the transformed
> system correspond to the smallest magnitude omega eigenvalues of the
> original problem.
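>
> For example, with made-up numbers: if the largest magnitude eigenvalue of
> A0^{-1}*B0 is theta = 50, the corresponding eigenvalue of the original
> problem is omega = 1/theta = 0.02, which is indeed the smallest in magnitude.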
>
> Cheers
> Dave
>
>
>> Also, sometime I would like to talk about incorporating the multilevel
>> eigensolver. I am sure you could make lots of improvements to my initial
>> attempt. I will send
>> you a separate email, since I am getting serious about testing it.
>>
>>   Thanks,
>>
>>      Matt
>>
>>
>>> Florian: I would not recommend setting the KSP matrices directly, as it may
>>> produce strange side-effects. We should have an interface function to pass
>>> this matrix. Currently there is STPrecondSetMatForPC() but it has two
>>> problems: (1) it is intended for STPRECOND, so cannot be used with
>>> Krylov-Schur, and (2) it is not currently available in the python interface.
>>>
>>> The approach used by Matt is a workaround that does not use ST, so you
>>> can handle linear solves with a KSP of your own.
>>>
>>> As an alternative, since your problem is symmetric, you could try
>>> LOBPCG, assuming that the leftmost eigenvalues are those that you want
>>> (e.g. if all eigenvalues are non-negative). In that case you could use
>>> STPrecondSetMatForPC(), but the remaining issue is calling it from python.
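>>>
>>> Roughly, the python side of that would look like the sketch below (LOBPCG
>>> only needs products with A0 and B0 plus a preconditioner, so the shell A0
>>> is fine; the P0 matrix would still have to be attached at the C level via
>>> STPrecondSetMatForPC() for now):
>>>
>>>   from slepc4py import SLEPc
>>>
>>>   eps = SLEPc.EPS().create()
>>>   eps.setOperators(A0, B0)              # A0*v = omega*B0*v, as above
>>>   eps.setProblemType(SLEPc.EPS.ProblemType.GHEP)  # needs B0 pos. (semi-)definite
>>>   eps.setType(SLEPc.EPS.Type.LOBPCG)
>>>   eps.setWhichEigenpairs(SLEPc.EPS.Which.SMALLEST_REAL)  # leftmost omega
>>>   eps.solve()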
>>>
>>> If you are using the git repo, I could add the relevant code.
>>>
>>> Jose
>>>
>>>
>>>
>>> > On 8 Feb 2021, at 14:22, Matthew Knepley <knepley at gmail.com>
>>> wrote:
>>> >
>>> > On Mon, Feb 8, 2021 at 7:04 AM Florian Bruckner <e0425375 at gmail.com>
>>> wrote:
>>> > Dear PETSc / SLEPc Users,
>>> >
>>> > my question is very similar to the one posted here:
>>> >
>>> https://lists.mcs.anl.gov/pipermail/petsc-users/2018-August/035878.html
>>> >
>>> > The eigensystem I would like to solve looks like:
>>> > B0 v = 1/omega A0 v
>>> > B0 and A0 are both Hermitian; A0 is positive definite, but only given
>>> as a linear operator (MatShell). I am looking for the largest eigenvalues
>>> (=smallest omega).
>>> >
>>> > I also have a sparse approximation P0 of the A0 operator, which I
>>> would like to use as a preconditioner, using something like this:
>>> >
>>> >         es = SLEPc.EPS().create(comm=fd.COMM_WORLD)
>>> >         st = es.getST()
>>> >         ksp = st.getKSP()
>>> >         ksp.setOperators(self.A0, self.P0)
>>> >
>>> > Unfortunately PETSc still complains that it cannot create a
>>> preconditioner for a type 'python' matrix although P0.type == 'seqaij' (but
>>> A0.type == 'python').
>>> > By the way, should P0 be an approximation of A0 or does it have to
>>> include B0?
>>> >
>>> > Right now I am using the Krylov-Schur method. Are there any
>>> alternatives if A0 is only given as an operator?
>>> >
>>> > Jose can correct me if I say something wrong.
>>> >
>>> > When I did this, I made a shell operator for the action of A0^{-1} B0
>>> which has a KSPSolve() in it, so you can use your P0 preconditioning
>>> matrix, and
>>> > then handed that to EPS. You can see me do it here:
>>> >
>>> >
>>> https://gitlab.com/knepley/bamg/-/blob/master/src/coarse/bamgCoarseSpace.c#L123
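>>> >
>>> > In petsc4py/slepc4py terms the idea is roughly the following (just a
>>> > sketch, not the code at that link; it assumes your A0, B0 and P0 from
>>> > above):
>>> >
>>> >   from petsc4py import PETSc
>>> >   from slepc4py import SLEPc
>>> >
>>> >   class InvApplyB:
>>> >       """Shell context computing y = A0^{-1} (B0 x) via a KSPSolve."""
>>> >       def __init__(self, A0, P0, B0):
>>> >           self.B0 = B0
>>> >           self.ksp = PETSc.KSP().create(A0.getComm())
>>> >           self.ksp.setOperators(A0, P0)   # P0 builds the preconditioner
>>> >           self.ksp.setFromOptions()
>>> >           self.work = B0.createVecLeft()
>>> >       def mult(self, mat, x, y):
>>> >           self.B0.mult(x, self.work)      # work = B0 x
>>> >           self.ksp.solve(self.work, y)    # y = A0^{-1} work
>>> >
>>> >   T = PETSc.Mat().createPython(A0.getSizes(),
>>> >                                context=InvApplyB(A0, P0, B0),
>>> >                                comm=A0.getComm())
>>> >   T.setUp()
>>> >
>>> >   es = SLEPc.EPS().create(comm=A0.getComm())
>>> >   es.setOperators(T)                      # standard problem T v = theta v
>>> >   # T = A0^{-1} B0 is self-adjoint only in the A0 inner product, so NHEP
>>> >   # is the safe generic choice for a plain shell like this.
>>> >   es.setProblemType(SLEPc.EPS.ProblemType.NHEP)
>>> >   es.setWhichEigenpairs(SLEPc.EPS.Which.LARGEST_MAGNITUDE)
>>> >   es.solve()
>>> >   for i in range(es.getConverged()):
>>> >       theta = es.getEigenvalue(i)
>>> >       omega = 1.0/theta                   # eigenvalue of the original problem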
>>> >
>>> > I had a hard time getting the embedded solver to work the way I
>>> wanted, but maybe that is the better way.
>>> >
>>> >   Thanks,
>>> >
>>> >      Matt
>>> >
>>> > thanks for any advice
>>> > best wishes
>>> > Florian
>>> >
>>> >
>>> > --
>>> > What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> > -- Norbert Wiener
>>> >
>>> > https://www.cse.buffalo.edu/~knepley/
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>