[petsc-users] Slepc JD and GD converge to wrong eigenpair

Toon Weyens toon.weyens at gmail.com
Fri Mar 31 17:03:22 CDT 2017


Sorry, I forgot to add the download link for the matrix files:
https://transfer.pcloud.com/download.html?code=5ZViHIZI96yPIODHYSZ7y1HZMloBfcyhAHunjQVMpWUJIykLt76k

Thanks

On Sat, Apr 1, 2017 at 12:01 AM Toon Weyens <toon.weyens at gmail.com> wrote:

> Dear Jose,
>
> I have saved the matrices in MATLAB format and am sending them to you
> using pCloud. If you want another format, please tell me. Please also note
> that they are about 1.4 GB each.
>
> I also attach a typical output of -eps_view and -log_view in output.txt,
> for 8 processes.
>
> Thanks so much for helping me out! I think PETSc and SLEPc are amazing
> inventions that have really saved me many months of work!
>
> Regards
>
> On Fri, Mar 31, 2017 at 5:12 PM Jose E. Roman <jroman at dsic.upv.es> wrote:
>
> In order to answer about GD I would need to know all the settings you are
> using. Also, if you could send me the matrix, I could do some tests.
> GD and JD are preconditioned eigensolvers, which need a reasonably good
> preconditioner. But MUMPS is a direct solver, not a preconditioner, and
> that is often counterproductive in this kind of method.
> Jose
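>
> (A minimal, untested sketch of what this could look like with the SLEPc C
> API: attach a cheap iterative preconditioner, e.g. block Jacobi, to the GD
> correction solve instead of MUMPS. The matrices A and B are assumed to be
> assembled elsewhere and error checking is omitted.)
>
>     EPS eps;
>     ST  st;
>     KSP ksp;
>     PC  pc;
>     EPSCreate(PETSC_COMM_WORLD, &eps);
>     EPSSetOperators(eps, A, B);      /* generalized problem A x = lambda B x */
>     EPSSetProblemType(eps, EPS_GNHEP);
>     EPSSetType(eps, EPSGD);          /* Generalized Davidson */
>     EPSGetST(eps, &st);              /* GD/JD take the preconditioner from the ST object */
>     STGetKSP(st, &ksp);
>     KSPGetPC(ksp, &pc);
>     PCSetType(pc, PCBJACOBI);        /* cheap preconditioner instead of a direct solver */
>     EPSSetFromOptions(eps);
>     EPSSolve(eps);
>
> (On the command line this corresponds roughly to -eps_type gd -st_pc_type
> bjacobi.)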
>
>
> > On 31 Mar 2017, at 16:45, Toon Weyens <toon.weyens at gmail.com>
> wrote:
> >
> > Dear both,
> >
> > I have recompiled SLEPc and PETSc without debugging, as well as with the
> recommended --with-fortran-kernels=1. In the attachment I show the scaling
> for a typical "large" simulation with about 120,000 unknowns, using
> Krylov-Schur.
> >
> > There are two sets of data points there, as I do two EPS solves in one
> simulation. The second solve is faster as it results from a grid
> refinement of the first solve and takes the solution of the first solve as
> a good initial guess. Note that there are two pages in the PDF; on the
> second page I show the time · n_procs.
> >
> > As you can see, the scaling is better than before, especially up to 8
> processes (which means about 15,000 unknowns per process, which is, as I
> recall, cited as a good minimum on the website).
> >
> > I am currently trying to run make streams NPMAX=8, but the cluster is
> extraordinarily crowded today and it does not like my interactive jobs. I
> will try to run it as soon as possible.
> >
> > The main issue now, however, is again the first issue: the Generalized
> Davidson method does not converge to the physically correct negative
> eigenvalue (it should be about -0.05, as Krylov-Schur gives me). Instead it
> stays stuck at some small positive eigenvalue of about +0.0002. It looks as
> if the solver really does not like crossing the eigenvalue = 0 barrier, a
> behavior I also see in smaller simulations, where the convergence is
> greatly slowed down when crossing it.
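> >
> > (A rough, untested sketch of targeting the expected interior eigenvalue
> directly, combined with harmonic extraction, in the SLEPc C API; the target
> value -0.05 is simply the eigenvalue reported by Krylov-Schur, and whether
> this helps here is an open question:)
> >
> >     EPSSetType(eps, EPSGD);
> >     EPSSetTarget(eps, -0.05);                         /* expected eigenvalue */
> >     EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE); /* eigenvalues closest to the target */
> >     EPSSetExtraction(eps, EPS_HARMONIC);              /* harmonic extraction for interior values */
> >
> > (The command-line equivalent would be roughly -eps_type gd -eps_target
> -0.05 -eps_harmonic.)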
> >
> > However, this time, for this big simulation, just increasing NCV does
> not do the trick, at least not up to NCV=2048.
> >
> > I also tried using target magnitude, without success.
> >
> > I started implementing the capability to start with Krylov-Schur and
> then switch to GD with EPSSetInitialSpace when a certain precision has been
> reached, but then realized it might be a bit of overkill, as the SLEPc
> solution phase in my code generally takes no more than 15% of the time.
> There are probably other places where I can gain more than a few percent.
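> >
> > (If that switch is ever needed, a rough sketch with the SLEPc C API,
> assuming eps_ks holds the partially converged Krylov-Schur solve, eps_gd is
> the GD solver, A is the system matrix, and error checking is omitted:)
> >
> >     Vec      v0[1], vi;
> >     PetscInt nconv;
> >     EPSGetConverged(eps_ks, &nconv);           /* number of converged eigenpairs */
> >     if (nconv > 0) {
> >       MatCreateVecs(A, &v0[0], NULL);
> >       VecDuplicate(v0[0], &vi);
> >       EPSGetEigenvector(eps_ks, 0, v0[0], vi); /* first (approximate) eigenvector */
> >       EPSSetInitialSpace(eps_gd, 1, v0);       /* seed GD with it */
> >     }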
> >
> > However, if there is another trick that can make GD work, it would
> certainly be appreciated, as in my experience it is really about 5 times
> faster than Krylov-Schur!
> >
> > Thanks!
> >
> > Toon
>
>