[petsc-users] Matrix-free generalised eigenvalue problem

Quentin Chevalier quentin.chevalier at polytechnique.edu
Tue Jul 18 03:04:03 CDT 2023


My apologies, I didn't think the previous message through - the
operation U S V^H is far from 16 inner products and more like (1M)^2
inner products of length 4. I guess I should try to exploit the sparsity
of U and V (does CSR work in parallel?) but create a dense R.
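
A minimal petsc4py sketch of that storage split (sizes are placeholders; MPIAIJ is PETSc's distributed CSR format, so CSR does work in parallel, with each rank owning a contiguous block of rows):

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
m = n = 100  # placeholder global sizes
# Sparse factors in parallel CSR (MPIAIJ) storage:
U = PETSc.Mat().createAIJ([m, n], comm=comm)
V = PETSc.Mat().createAIJ([n, n], comm=comm)
# The low-rank product R = U S V^H can go in a dense matrix:
R = PETSc.Mat().createDense([m, n], comm=comm)
for A in (U, V, R): A.setUp(); A.assemble()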

Cheers,

Quentin



Quentin CHEVALIER – AI research track

LadHyX - Ecole polytechnique

__________



On Tue, 18 Jul 2023 at 09:57, Quentin Chevalier
<quentin.chevalier at polytechnique.edu> wrote:
>
> Matrix-to-matrix products are taking much longer than expected... My snippet is below. m and n are quite large, >1M each. I'm running this on 35 procs. As you can see, U, S and V are quite sparse SVD matrices (only their first 4 columns are dense, plus a chop). I therefore expected the approximate R to have only rank 4 and computations to run smoothly once the cross products between U and V are computed... Right now my bottleneck is not the preconditioner but getting that approximation of M2. What do you think?
>
> def approximate(self, k: int):
>     m, n = self.R.getSize()
>     m_local, n_local = self.R.getLocalSize()
>     I, J = self.tmp3.vector.getOwnershipRange()
>     S = pet.Mat().create(comm=comm); S.setSizes([[n_local, n], [n_local, n]])
>     U = pet.Mat().create(comm=comm); U.setSizes([[m_local, m], [n_local, n]])
>     V = pet.Mat().create(comm=comm); V.setSizes([[n_local, n], [n_local, n]])
>     for A in (U, S, V): A.setUp()
>     g = np.loadtxt("gains/txt/dummy.txt")
>     for i in range(k):
>         S.setValue(i, i, g[i])
>         loadStuff("left-vecs/", params, self.tmp3)
>         U.setValues(self.reindex, [i], self.tmp3.x.array)
>         loadStuff("right-vecs/", params, self.tmp3)
>         V.setValues(range(I, J), [i], self.tmp3.vector.array)
>     for A in (U, S, V): A.assemble()
>     U.chop(1e-6); V.chop(1e-6)
>
>     V.hermitianTranspose(V)  # V <- V^H in place
>     self.P.matMult(U.matMult(S.matMult(V, S), U), U)  # Multiplications in place (everyone is square except U)
>     M2 = self.N.duplicate()
>     M2.matMult(U, U)
>     U.hermitianTranspose().matMult(U, M2)
>     return self.M + M2
>
> Quentin
>
>
> On Tue, 18 Jul 2023 at 07:48, Quentin Chevalier <quentin.chevalier at polytechnique.edu> wrote:
> >
> > Thank you for that pointer! I have on hand a partial SVD of R, so I used that to build the approximate matrix instead.
> >
> > It's great that so many nice features of PETSc like STSetPreconditionerMat are accessible through petsc4py!
> >
> > Good day,
> >
> > Quentin
> >
> >
> >
> > On Mon, 17 Jul 2023 at 17:29, Jose E. Roman <jroman at dsic.upv.es> wrote:
> > >
> > > It is possible to pass a different matrix to build the preconditioner. That is, the shell matrix for B (EPSSetOperators) and an explicit matrix (that approximates B) for the preconditioner. For instance, you can try passing M for building the preconditioner. Since M is an explicit matrix, you can try the default preconditioner (block Jacobi with ILU as local solver) or even a full LU decomposition. The effectiveness of the preconditioner will depend on how the update M+R^H P M P R moves the eigenvalues around.
> > >
> > > You can do this with STSetSplitPreconditioner() or STSetPreconditionerMat(). In your case either of them will do.
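> > >
> > > A minimal slepc4py sketch using STSetPreconditionerMat (assuming eps is the EPS object, A and B the shell operators, and M the explicit matrix used only to build the preconditioner):
> > >
> > > from slepc4py import SLEPc
> > >
> > > eps = SLEPc.EPS().create()
> > > eps.setOperators(A, B)                           # A, B may be shell matrices
> > > eps.setProblemType(SLEPc.EPS.ProblemType.GHEP)
> > > st = eps.getST()
> > > st.setPreconditionerMat(M)                       # STSetPreconditionerMat
> > > eps.setFromOptions()
> > > eps.solve()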
> > >
> > > Jose
> > >
> > >
> > > > El 17 jul 2023, a las 15:50, Quentin Chevalier <quentin.chevalier at polytechnique.edu> escribió:
> > > >
> > > > Thank you for this suggestion, I tried to implement that but it's
> > > > proving pretty hard to implement MATOP_GET_DIAGONAL without completely
> > > > tanking performance. After all, B is a shell matrix for a reason: it
> > > > looks like M+R^H P M P R with R itself a shell matrix.
> > > >
> > > > Allow me to point out that I have no shift. My eigenvalue problem is
> > > > purely about the largest eigenvalues. Sections 8.2 and 3.4.3 led me
> > > > to think that there was a way to avoid computing B^-1 (or writing a
> > > > shell matrix for it)... But you seem to stress that there's no way
> > > > around it.
> > > >
> > > > Quentin
> > > >
> > > >
> > > >
> > > > On Mon, 17 Jul 2023 at 11:56, Jose E. Roman <jroman at dsic.upv.es> wrote:
> > > >>
> > > >> The B-inner product is independent of the ST operator. See Table 3.2. In generalized eigenproblems you always have an inverse.
> > > >>
> > > >> If your matrix is diagonally dominant, try implementing the MATOP_GET_DIAGONAL operation and using PCJACOBI. Apart from this, you have to build your own preconditioner.
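> > > >>
> > > >> In petsc4py, MATOP_GET_DIAGONAL corresponds to a getDiagonal method on the shell-matrix context; a minimal sketch (the diagonal values are placeholders, what to put there is problem-dependent):
> > > >>
> > > >> class BContext:
> > > >>     def mult(self, mat, x, y):
> > > >>         ...  # y = B x, the action already implemented
> > > >>     def getDiagonal(self, mat, d):
> > > >>         d.set(1.0)  # placeholder: fill d with (an estimate of) diag(B)
> > > >>
> > > >> With that in place, the linear solver's PC can simply be set to 'jacobi'.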
> > > >>
> > > >> Jose
> > > >>
> > > >>
> > > >>> El 17 jul 2023, a las 11:48, Quentin Chevalier <quentin.chevalier at polytechnique.edu> escribió:
> > > >>>
> > > >>> Hello Jose,
> > > >>>
> > > >>> I guess I expected B not to be inverted but instead used as a mass matrix for a problem-specific inner product, since I specified GHEP as the problem type. p50 of the same user manual seems to imply that that would indeed be the case. I don't see what problem there would be with using a shell B matrix as a weighting matrix, as long as a mult utility is provided, of course.
> > > >>>
> > > >>> I tried the first approach - I set up my KSP as CG since B is hermitian positive-definite (I made a mistake in my first email), but I'm getting a "KSPSolve has not converged, reason DIVERGED_ITS" error. I'm already letting it run for 1000 iterations, so it seems suspiciously slow for a CG solver.
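> > > >>>
> > > >>> For reference, a sketch of what that setup looks like through the ST object (tolerances are placeholders):
> > > >>>
> > > >>> st = eps.getST()
> > > >>> ksp = st.getKSP()                  # the inner solver that applies B^{-1}
> > > >>> ksp.setType('cg')                  # valid since B is hermitian positive-definite
> > > >>> ksp.setTolerances(rtol=1e-8, max_it=1000)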
> > > >>>
> > > >>> I'm grappling with a shell preconditioner now to try and speed things up, but I'm unsure which preconditioners allow for shell matrices.
> > > >>>
> > > >>> Thank you for your time,
> > > >>>
> > > >>> Quentin
> > > >>>
> > > >>>
> > > >>> On Wed, 12 Jul 2023 at 19:24, Jose E. Roman <jroman at dsic.upv.es> wrote:
> > > >>>>
> > > >>>> By default, it is solving the problem as B^{-1}*A*x=lambda*x (see chapter on Spectral Transformation). That is why A can be a shell matrix without problem. But B needs to be an explicit matrix in order to compute an LU factorization. If B is also a shell matrix then you should set an iterative solver for the associated KSP (see examples in the chapter).
> > > >>>>
> > > >>>> An alternative is to create a shell matrix M that computes the action of B^{-1}*A, then pass M to the EPS solver as a standard eigenproblem.
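> > > >>>>
> > > >>>> A minimal petsc4py sketch of such a shell matrix (assuming A and B are existing square Mat objects):
> > > >>>>
> > > >>>> from petsc4py import PETSc
> > > >>>>
> > > >>>> class BinvA:
> > > >>>>     # Shell context computing y = B^{-1} (A x) via an inner KSP for B.
> > > >>>>     def __init__(self, A, B):
> > > >>>>         self.A = A
> > > >>>>         self.w = A.createVecLeft()           # work vector holding A x
> > > >>>>         self.ksp = PETSc.KSP().create(A.comm)
> > > >>>>         self.ksp.setOperators(B)
> > > >>>>         self.ksp.setFromOptions()
> > > >>>>     def mult(self, mat, x, y):
> > > >>>>         self.A.mult(x, self.w)
> > > >>>>         self.ksp.solve(self.w, y)
> > > >>>>
> > > >>>> M = PETSc.Mat().createPython(A.getSizes(), context=BinvA(A, B), comm=A.comm)
> > > >>>> M.setUp()
> > > >>>> # M can then be passed to EPS.setOperators(M) as a standard eigenproblem.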
> > > >>>>
> > > >>>> Jose
> > > >>>>
> > > >>>>
> > > >>>>> El 12 jul 2023, a las 19:04, Quentin Chevalier <quentin.chevalier at polytechnique.edu> escribió:
> > > >>>>>
> > > >>>>> Hello PETSc Users,
> > > >>>>>
> > > >>>>> I have a generalised eigenvalue problem: Ax = lambda Bx.
> > > >>>>> I used to have only A as a matrix-free operator; I used MUMPS and an LU preconditioner, and everything worked fine.
> > > >>>>>
> > > >>>>> Now B is matrix-free as well, and my solver is returning an error: "MatSolverType mumps does not support matrix type python", which is ironic given it seems to handle A quite fine.
> > > >>>>>
> > > >>>>> I have read in the user manual here that some solvers may require additional operations to be supplied for B, like MATOP_GET_DIAGONAL, but it's unclear to me exactly what I should be implementing and what the best solver for my case is.
> > > >>>>>
> > > >>>>> A is hermitian; B is hermitian and positive, but neither positive-definite nor real. I have therefore specified the GHEP problem type to the EPS object.
> > > >>>>>
> > > >>>>> I use PETSc in complex mode through the petsc4py bridge.
> > > >>>>>
> > > >>>>> Any help on how to get EPS to work for a generalised matrix-free case would be welcome. Performance is not a key issue here - I have a tractable, high-value case on hand.
> > > >>>>>
> > > >>>>> Thank you for your time,
> > > >>>>>
> > > >>>>> Quentin
> > > >>>>
> > > >>
> > >

