[petsc-dev] strangeness in Chebyshev estimate of eigenvalues

Matthew Knepley knepley at gmail.com
Tue Aug 25 17:34:35 CDT 2015


On Tue, Aug 25, 2015 at 5:09 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:

>
> > On Aug 25, 2015, at 4:50 PM, Matthew Knepley <knepley at gmail.com> wrote:
> >
> > On Tue, Aug 25, 2015 at 4:44 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> > > On Aug 25, 2015, at 3:22 PM, Matthew Knepley <knepley at gmail.com>
> wrote:
> > >
> > > On Tue, Aug 25, 2015 at 12:13 PM, Mark Adams <mfadams at lbl.gov> wrote:
> > >
> > >
> > > On Sat, Aug 22, 2015 at 10:39 PM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
> > >
> > > > On Aug 22, 2015, at 9:26 PM, Mark Adams <mfadams at lbl.gov> wrote:
> > > >
> > > > Good point.  I cannot see any reason to use the initial guess for
> the eigenvalue estimate.
> > >
> > >    Why not? Won't it better select for the eigenspace actually seen
> by the linear solver, since the linear solver starts with that guess? I am
> just making this up because I haven't looked at how the initial guess
> affects the eigenanalysis for Chebyshev, but ...
> > >
> > > I've not looked at it, and it would be hard to because it is not a
> well-defined problem.  But if the initial guess is low frequency then it
> will give a poor estimate for the highest eigenvalue.  It is not clear to
> me what the relationship is, generally, between an initial guess and the
> solution, spectrally.  Initial guesses will change as the problem evolves,
> but we don't update the eigenvalue estimates.  If the user's initial guess
> happens to be zero then god knows what happens. (This is actually the case
> for the XGC1 code!!!)  It adds one more variable in debugging AMG, which is
> hard enough as it is.
> > >
> > >
> > > >  I would vote for (1).
> > > >
> > > > Also, I hope cheb->random is the default.
> > >
> > >    Well, then different machines will produce different convergence
> histories, which is annoying for any kind of "no change" daily testing.
> Except for you, most of the rest of us don't like the random default, sorry
> :-)
> > >
> > > We can use the deterministic random number that I added to GAMG (e.g.,
> v(i) = ((double)((gid(i)*51)%100) - 50.)/50.)
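A minimal sketch of what such a deterministic fill could look like in
PETSc-style C (the helper name VecFillDeterministic and the exact scaling
are illustrative, not existing PETSc API):

    /* Fill a Vec with reproducible values in [-1,1) derived from each
       entry's global index, so every machine produces the same vector. */
    static PetscErrorCode VecFillDeterministic(Vec v)
    {
      PetscErrorCode ierr;
      PetscInt       lo,hi,i;
      PetscScalar    *a;

      PetscFunctionBegin;
      ierr = VecGetOwnershipRange(v,&lo,&hi);CHKERRQ(ierr);
      ierr = VecGetArray(v,&a);CHKERRQ(ierr);
      for (i=lo; i<hi; i++) {
        /* (gid*51) mod 100 lies in [0,99]; shift and scale into [-1,1) */
        a[i-lo] = ((PetscScalar)((i*51)%100) - 50.)/50.;
      }
      ierr = VecRestoreArray(v,&a);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }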
> > >
> > > Wait, why are you doing this? Why not just create a PetscRandom and
> set the seed?
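For reference, a seeded PetscRandom is only a few lines; a sketch (the seed
value is arbitrary):

    PetscRandom rctx;
    ierr = PetscRandomCreate(PetscObjectComm((PetscObject)ksp),&rctx);CHKERRQ(ierr);
    ierr = PetscRandomSetSeed(rctx,1234);CHKERRQ(ierr);
    ierr = PetscRandomSeed(rctx);CHKERRQ(ierr);  /* apply the seed to the generator */
    ierr = VecSetRandom(B,rctx);CHKERRQ(ierr);
    ierr = PetscRandomDestroy(&rctx);CHKERRQ(ierr);

Though, as Barry notes just below, a fixed seed only pins down the sequence
for a given generator implementation; it does not by itself make the numbers
identical across systems.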
> >
> >    Yes, he should create a new PetscRandom implementation called, for
> example, deterministic that would do some simple-minded thing. All the
> current PetscRandom implementations might create different numbers with
> different OSes, or versions of an OS, thus making "no change" tests impossible.
> >
> > This makes no sense. drand48() is a 48-bit linear congruential random
> number generator. It should never return a different value given the same
> seed.
>
> From the makefile
>
>    #requiresfunction 'PETSC_HAVE_DRAND48'
>
> It appears that not all systems have drand48(). So it is still an issue of
> making sure we have a way of generating the exact same numbers on all
> systems.


drand48() is POSIX, and it's a one-line algorithm. If you can dig up a system
on which it is missing, I will implement a stopgap.
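Should such a system turn up, a stopgap really is tiny: drand48() is the
48-bit linear congruential recurrence X_{n+1} = (a*X_n + c) mod 2^48 with
a = 0x5DEECE66D and c = 0xB. A sketch (the start state here is an arbitrary
fixed value; its low 16 bits follow srand48()'s 0x330E convention):

    #include <stdint.h>

    static uint64_t lcg48_state = 0x1234ABCD330EULL;  /* fixed start state */

    /* Returns a double in [0,1), reproducing the drand48() recurrence. */
    static double stopgap_drand48(void)
    {
      lcg48_state = (0x5DEECE66DULL*lcg48_state + 0xBULL) & ((1ULL << 48) - 1);
      return (double)lcg48_state/(double)(1ULL << 48);
    }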

   Matt


>
>   Barry
>
>
>
> >
> >    Matt
> >
> >    Barry
> >
> > >
> > >    Matt
> > >
> > >
> > > >  One of my apps uses a zero RHS for the first solve, just because
> they did not care about adding some logic like: if (.not. first_solve)
> solve().  Using a zero RHS for the eigenvalue estimate would be catastrophic.
> > >
> > >    There could be a check that the rhs norm is zero (cost of a global
> reduction?) and then use a nonzero initial guess to do the eigenanalysis.
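That check would be a single reduction; a sketch:

    PetscReal bnorm;
    ierr = VecNorm(ksp->vec_rhs,NORM_2,&bnorm);CHKERRQ(ierr);  /* one global reduction */
    if (bnorm == 0.0) {
      /* rhs is identically zero: fall back to a nonzero vector (random or
         deterministic) as B for the eigenvalue estimate */
    }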
> > >
> > > Too complicated for little gain, or loss even, and requires a
> reduction.
> > >
> > >
> > > >  I trust this is true, because the code works, but we should make
> sure.  And perhaps Cheby should check that the KSPSolve did all of its
> iterations (i.e., DIVERGED_ITS, or whatever).  Getting this wrong leads to
> silent errors that are a pain to debug.
> > >
> > >    Good point, the KSPChebyshevComputeExtremeEigenvalues_Private()
> routine should check that the n returned from KSPGetIterationNumber() is not
> zero, etc.
> > >
> > >   Barry
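A minimal sketch of that guard (the error code and message are illustrative):

    PetscInt n;
    ierr = KSPGetIterationNumber(cheb->kspest,&n);CHKERRQ(ierr);
    if (!n) SETERRQ(PetscObjectComm((PetscObject)ksp),PETSC_ERR_PLIB,"Chebyshev eigenvalue-estimate KSP performed no iterations");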
> > >
> > > So I think we should:
> > >
> > > 1) set the initial guess with my new sort of random number (we don't
> need a good-quality random number here)
> > > 2) add the check for DIVERGED_ITS, or something, in
> KSPChebyshevComputeExtremeEigenvalues_Private.  Should it stop?  Probably.
> > >
> > > Mark
> > >
> > >
> > > >
> > > > I can do this.
> > > >
> > > > Mark
> > > >
> > > >
> > > >
> > > > On Sat, Aug 22, 2015 at 6:35 PM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
> > > >
> > > >    From KSPSolve_Chebyshev()
> > > >
> > > >       X = ksp->work[0];
> > > >       if (cheb->random) {
> > > >         B    = ksp->work[1];
> > > >         ierr = VecSetRandom(B,cheb->random);CHKERRQ(ierr);
> > > >       } else {
> > > >         B = ksp->vec_rhs;
> > > >       }
> > > >       ierr = KSPSolve(cheb->kspest,B,X);CHKERRQ(ierr);
> > > >
> > > >       if (ksp->guess_zero) {
> > > >         ierr = VecZeroEntries(X);CHKERRQ(ierr);
> > > >       }
> > > >       ierr = KSPChebyshevComputeExtremeEigenvalues_Private(cheb->kspest,&min,&max);CHKERRQ(ierr);
> > > >
> > > >    This seems to do strange stuff with the initial guess for the
> eigenanalysis. ksp->work[0] is a work vector used within the Chebyshev
> algorithm, so at this point in the code it will have just whatever stuff it
> had in it from a previous Chebyshev solve, or zero the first time
> through. It seems bad to use this vector as the initial guess for the
> estimator. Then AFTER the KSPSolve() it zeros ksp->work[0], but only if
> the original system being solved has a zero initial guess, even though the
> values in X will not be used again. WTF?
> > > >
> > > >    Shouldn't the code either
> > > >
> > > > 1) zero X = ksp->work[0] every time BEFORE the KSPSolve(), or
> > > > 2) if ksp->guess_zero, zero X; otherwise copy the caller's initial
> guess vec_sol into X before the KSPSolve(), so that initial guess is used
> in estimating the eigenvalues?
> > > >
> > > >   Barry
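For concreteness, option 2 might look like the following sketch (not
committed code; it assumes cheb->kspest is configured to use a nonzero
initial guess):

    if (ksp->guess_zero) {
      ierr = VecZeroEntries(X);CHKERRQ(ierr);
    } else {
      ierr = VecCopy(ksp->vec_sol,X);CHKERRQ(ierr);  /* caller's initial guess */
    }
    ierr = KSPSolve(cheb->kspest,B,X);CHKERRQ(ierr);
    ierr = KSPChebyshevComputeExtremeEigenvalues_Private(cheb->kspest,&min,&max);CHKERRQ(ierr);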
> > > >
> > > >
> > >
> > >
> > >
> > >
> > >
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener