[petsc-dev] [petsc-users] gamg failure with petsc-dev
Mark Adams
mfadams at lbl.gov
Wed Aug 20 13:34:25 CDT 2014
Quick notes: you need to give the true null space as a "near null space"
as well; the two methods don't try to talk to each other. It sounds like
you are doing 2D, so you want to add 3 near null space vectors. Also, it's
not clear what "interlaced" means here; AMG really wants the ordering to be
node major, not equation major.
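For reference, a minimal sketch of setting both (this is an illustration of
the PETSc API, not code from this thread; A, ksp, coords and rotation stand
for the assembled velocity matrix, the solver, a vector of nodal coordinates,
and the rotational mode vector):

  PetscErrorCode ierr;
  MatNullSpace   nearnull, truenull;

  /* all three 2D rigid-body modes (x/y translation plus rotation), built
     from the nodal coordinates, as the near null space used by GAMG */
  ierr = MatNullSpaceCreateRigidBody(coords, &nearnull);CHKERRQ(ierr);
  ierr = MatSetNearNullSpace(A, nearnull);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);

  /* the true null space (here only the rotational mode) has to be set
     separately; it is not picked up from the near null space */
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &rotation, &truenull);CHKERRQ(ierr);
  ierr = KSPSetNullSpace(ksp, truenull);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&truenull);CHKERRQ(ierr);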
Mark
On Wed, Aug 20, 2014 at 11:27 AM, Stephan Kramer <s.kramer at imperial.ac.uk>
wrote:
> Thanks, Mark and Jed, for your comments. I have tried improving the
> eigenvalue bounds but that didn't seem to help much (the eigenvalues didn't
> change much either). The largest eigenvalues are indeed in the range 2-3. I
> also tried Richardson+SOR smoothing instead of Chebyshev+SOR and I'm seeing
> the same thing there: without calling MatSetBlockSizes() it converges in 57
> iterations; if I do set the block size it takes an order of magnitude more
> iterations (~600). So it looks like the problem is not in the eigenvalue
> estimates. Note that it is exactly the same matrix (so interlaced in both
> cases); the only difference is commenting out the calls to
> MatSetBlockSizes().
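> For reference, a minimal sketch of the call being toggled here (assuming the
> assembled 2D velocity matrix is called A and the unknowns are interlaced as
> u0, v0, u1, v1, ..., so the block size is 2):
>
>   PetscErrorCode ierr;
>   /* row and column block size 2 for interlaced (u,v) unknowns; commenting
>      this out gives the "no block size" case compared above */
>   ierr = MatSetBlockSizes(A, 2, 2);CHKERRQ(ierr);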
>
> The problem we're solving is a Stokes velocity solve in 2D in a
> cylindrical (annulus) domain with rapidly varying viscosity. What I
> should probably have mentioned before is that it applies free slip on
> the sides, so the system is ill-posed. We supply a near null space with
> the 3 different modes, but also set a "true" null space (KSPSetNullSpace)
> with only the rotational mode. We did verify the supplied vectors using
> MatNullSpaceTest.
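> As an illustration only (a sketch, not the actual code being described), the
> rotational mode and the check could look like this, assuming an interlaced
> 2D velocity layout and a coordinates vector coords ordered like the solution:
>
>   Vec               rotation;
>   MatNullSpace      truenull;
>   PetscBool         isnull;
>   const PetscScalar *xy;
>   PetscScalar       *r;
>   PetscInt          n, i;
>   PetscErrorCode    ierr;
>
>   /* rigid rotation about the origin of the annulus: (u,v) = (-y, x) */
>   ierr = VecDuplicate(coords, &rotation);CHKERRQ(ierr);
>   ierr = VecGetLocalSize(coords, &n);CHKERRQ(ierr);
>   ierr = VecGetArrayRead(coords, &xy);CHKERRQ(ierr);
>   ierr = VecGetArray(rotation, &r);CHKERRQ(ierr);
>   for (i = 0; i < n; i += 2) { r[i] = -xy[i+1]; r[i+1] = xy[i]; }
>   ierr = VecRestoreArray(rotation, &r);CHKERRQ(ierr);
>   ierr = VecRestoreArrayRead(coords, &xy);CHKERRQ(ierr);
>   ierr = VecNormalize(rotation, NULL);CHKERRQ(ierr);
>
>   /* true null space containing only the rotational mode; MatNullSpaceTest
>      checks that A * rotation is (numerically) zero */
>   ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &rotation, &truenull);CHKERRQ(ierr);
>   ierr = MatNullSpaceTest(truenull, A, &isnull);CHKERRQ(ierr);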
>
> It's quite possible of course that we have a bug, so we'll continue
> investigating. On the other hand, the performance without setting the block
> size is good, so we're happy to continue with that as well. What I actually
> wanted to ask about - the previous was a bit tangential to that - is that
> we'd be quite keen to get a fix for the "zero-pivot on coarsened levels"
> problem merged in (e.g. the mark/gamg-zerod branch), as we have had better
> performance with cheby+sor than with cheby+jacobi in the cases we've looked
> at (this includes cases other than the above). Let us know if there's any
> way we can be helpful with that.
>
> Cheers
> Stephan Kramer
>
>
>
> On 17/08/14 16:10, Mark Adams wrote:
>
>>
>> The most common cause of this is inaccurate eigenvalue bounds. You can
>> try using more iterations for estimation.
>>
>> I have also found that CG converges much faster than GMRES (the default), so
>> if your system is SPD I would always use '-gamg_est_ksp_type cg'. Also,
>> '-pc_gamg_verbose 2' will print out the eigenvalues that GAMG computes
>> for smoothing. With -ksp_view you will see the eigenvalues that Cheby
>> is using (GAMG sets these so they should be the same if you use GAMG
>> with pc_gamg_nsmooths>0). The largest eigenvalue should be around 2-3.
>> If it is too low (<2) then the instability that Jed is referring to
>> would be the problem.
>>
>> You can also use more iterations, and see if it has converged, with
>> -gamg_est_ksp_max_it N. I think the default is 5.
>>
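>> Collected together, the options mentioned here would look roughly like this
>> (option names as used in this thread with the 2014-era petsc-dev; the value
>> 10 for the estimation iterations is just an illustrative increase over the
>> default of 5):
>>
>>   -pc_type gamg -pc_gamg_nsmooths 1 -pc_gamg_verbose 2 \
>>   -gamg_est_ksp_type cg -gamg_est_ksp_max_it 10 -ksp_view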
>
>