[petsc-users] Select a preconditioner for SLEPc eigenvalue solver Jacobi-Davidson
Jose E. Roman
jroman at dsic.upv.es
Tue Nov 5 11:33:13 CST 2019
JD sets STPRECOND at EPSSetUp(), if it was not set before. So I guess you need to add -st_type precond on the command line, so that it is set at EPSSetFromOptions().
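For example, your command line below would then read (same options as in your run, with -st_type precond added):

mpirun -n 1 ./ex3 -eps_type jd -st_type precond -st_ksp_type gmres -st_pc_type none -eps_view -eps_target 0 -eps_monitor -st_ksp_monitor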
Jose
> On 5 Nov 2019, at 18:13, Fande Kong <fdkong.jd at gmail.com> wrote:
>
> What if I want to determine the ST type at runtime?
>
> mpirun -n 1 ./ex3 -eps_type jd -st_ksp_type gmres -st_pc_type none -eps_view -eps_target 0 -eps_monitor -st_ksp_monitor
>
> ST is indeed STPrecond, but the passed preconditioning matrix is still ignored.
>
> EPS Object: 1 MPI processes
> type: jd
> search subspace is orthogonalized
> block size=1
> type of the initial subspace: non-Krylov
> size of the subspace after restarting: 6
> number of vectors after restarting from the previous iteration: 1
> threshold for changing the target in the correction equation (fix): 0.01
> problem type: symmetric eigenvalue problem
> selected portion of the spectrum: closest to target: 0. (in magnitude)
> number of eigenvalues (nev): 1
> number of column vectors (ncv): 17
> maximum dimension of projected problem (mpd): 17
> maximum number of iterations: 1700
> tolerance: 1e-08
> convergence test: relative to the eigenvalue
> BV Object: 1 MPI processes
> type: svec
> 17 columns of global length 100
> vector orthogonalization method: classical Gram-Schmidt
> orthogonalization refinement: if needed (eta: 0.7071)
> block orthogonalization method: GS
> doing matmult as a single matrix-matrix product
> DS Object: 1 MPI processes
> type: hep
> solving the problem with: Implicit QR method (_steqr)
> ST Object: 1 MPI processes
> type: precond
> shift: 0.
> number of matrices: 1
> KSP Object: (st_) 1 MPI processes
> type: gmres
> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> happy breakdown tolerance 1e-30
> maximum iterations=90, initial guess is zero
> tolerances: relative=0.0001, absolute=1e-50, divergence=10000.
> left preconditioning
> using PRECONDITIONED norm type for convergence test
> PC Object: (st_) 1 MPI processes
> type: none
> linear system matrix = precond matrix:
> Mat Object: 1 MPI processes
> type: shell
> rows=100, cols=100
> Solution method: jd
>
>
> The preconditioning matrix should be a SeqAIJ, not a shell matrix.
>
>
> Fande,
>
> On Tue, Nov 5, 2019 at 9:07 AM Jose E. Roman <jroman at dsic.upv.es> wrote:
> Currently, the function that passes the preconditioner matrix is specific to STPRECOND, so you have to add
> ierr = STSetType(st,STPRECOND);CHKERRQ(ierr);
> before
> ierr = STPrecondSetMatForPC(st,B);CHKERRQ(ierr);
> otherwise this latter call is ignored.
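> 
> Putting it together with your example (a minimal sketch, reusing the eps, st and B from your code):
> 
> ierr = EPSGetST(eps,&st);CHKERRQ(ierr);
> ierr = STSetType(st,STPRECOND);CHKERRQ(ierr);    /* must come first */
> ierr = STPrecondSetMatForPC(st,B);CHKERRQ(ierr); /* no longer ignored */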
>
> We may change the way in which ST is initialized a little bit, and maybe this will be modified as well; it is not decided yet.
>
> Jose
>
>
> > On 5 Nov 2019, at 0:28, Fande Kong <fdkong.jd at gmail.com> wrote:
> >
> > Thanks Jose,
> >
> > I think I understand now. Another question: what is the right way to set up a linear preconditioning matrix for the inner linear solver of JD?
> >
> > I was trying to do something like this:
> >
> > /*
> > Create eigensolver context
> > */
> > ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr);
> >
> > /*
> > Set operators. In this case, it is a standard eigenvalue problem
> > */
> > ierr = EPSSetOperators(eps,A,NULL);CHKERRQ(ierr);
> > ierr = EPSSetProblemType(eps,EPS_HEP);CHKERRQ(ierr);
> > ierr = EPSGetST(eps,&st);CHKERRQ(ierr);
> > ierr = STPrecondSetMatForPC(st,B);CHKERRQ(ierr);
> >
> > /*
> > Set solver parameters at runtime
> > */
> > ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
> >
> > /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> > Solve the eigensystem
> > - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */
> >
> > ierr = EPSSolve(eps);CHKERRQ(ierr);
> >
> >
> > But it did not work. A complete example is attached. I could try to dig into the code, but you may already know the answer.
> >
> >
> > On Wed, Oct 23, 2019 at 3:58 AM Jose E. Roman <jroman at dsic.upv.es> wrote:
> > Yes, it is confusing. Here is the explanation: when you use a target, the preconditioner is built from the matrix A-sigma*B. By default, instead of TARGET_MAGNITUDE we set LARGEST_MAGNITUDE, and in Jacobi-Davidson we treat this case by setting sigma=PETSC_MAX_REAL; the preconditioner is then built from the matrix B. The thing is that in a standard eigenproblem we have B=I, and hence there is no point in using a preconditioner; that is why we set PCNONE.
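> >
> > As a sketch (assuming a target of 0, as in your later run), the target can be set in code:
> >
> > ierr = EPSSetTarget(eps,0.0);CHKERRQ(ierr);
> > ierr = EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);CHKERRQ(ierr);
> >
> > or with -eps_target 0 on the command line, so that the preconditioner is built from A-sigma*B rather than from B.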
> >
> > Jose
> >
> >
> > > On 22 Oct 2019, at 19:57, Fande Kong via petsc-users <petsc-users at mcs.anl.gov> wrote:
> > >
> > > Hi All,
> > >
> > > It looks like the preconditioner is hard-coded in the Jacobi-Davidson solver. I could not select a preconditioner other than the default setting.
> > >
> > > For example, I was trying to select LU, but PCNONE was still used. I ran the standard example 2 in slepc/src/eps/examples/tutorials and got the following results.
> > >
> > >
> > > Thanks,
> > >
> > > Fande
> > >
> > >
> > > ./ex2 -eps_type jd -st_ksp_type gmres -st_pc_type lu -eps_view
> > >
> > > 2-D Laplacian Eigenproblem, N=100 (10x10 grid)
> > >
> > > EPS Object: 1 MPI processes
> > > type: jd
> > > search subspace is orthogonalized
> > > block size=1
> > > type of the initial subspace: non-Krylov
> > > size of the subspace after restarting: 6
> > > number of vectors after restarting from the previous iteration: 1
> > > threshold for changing the target in the correction equation (fix): 0.01
> > > problem type: symmetric eigenvalue problem
> > > selected portion of the spectrum: largest eigenvalues in magnitude
> > > number of eigenvalues (nev): 1
> > > number of column vectors (ncv): 17
> > > maximum dimension of projected problem (mpd): 17
> > > maximum number of iterations: 1700
> > > tolerance: 1e-08
> > > convergence test: relative to the eigenvalue
> > > BV Object: 1 MPI processes
> > > type: svec
> > > 17 columns of global length 100
> > > vector orthogonalization method: classical Gram-Schmidt
> > > orthogonalization refinement: if needed (eta: 0.7071)
> > > block orthogonalization method: GS
> > > doing matmult as a single matrix-matrix product
> > > DS Object: 1 MPI processes
> > > type: hep
> > > solving the problem with: Implicit QR method (_steqr)
> > > ST Object: 1 MPI processes
> > > type: precond
> > > shift: 1.79769e+308
> > > number of matrices: 1
> > > KSP Object: (st_) 1 MPI processes
> > > type: gmres
> > > restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> > > happy breakdown tolerance 1e-30
> > > maximum iterations=90, initial guess is zero
> > > tolerances: relative=0.0001, absolute=1e-50, divergence=10000.
> > > left preconditioning
> > > using PRECONDITIONED norm type for convergence test
> > > PC Object: (st_) 1 MPI processes
> > > type: none
> > > linear system matrix = precond matrix:
> > > Mat Object: 1 MPI processes
> > > type: shell
> > > rows=100, cols=100
> > > Solution method: jd
> > >
> > > Number of requested eigenvalues: 1
> > > Linear eigensolve converged (1 eigenpair) due to CONVERGED_TOL; iterations 20
> > > ---------------------- --------------------
> > > k ||Ax-kx||/||kx||
> > > ---------------------- --------------------
> > > 7.837972 7.71944e-10
> > > ---------------------- --------------------
> > >
> > >
> > >
> >
> > <ex3.c>
>