[petsc-dev] [petsc-users] gamg failure with petsc-dev

Jed Brown jed at jedbrown.org
Wed Oct 29 15:06:03 CDT 2014


Stephan Kramer <s.kramer at imperial.ac.uk> writes:

> On 18/09/14 15:41, Mark Adams wrote:
>> Thanks for updating me Stephan,
>>
>> This is a common problem.  I'm surprised we have not seen it more.
>>
>> Note, the problem with the solver dying when MatSetBlockSize is set
>> (correctly) is some sort of bug.  You have Stokes and use FieldSplit,
>> so GAMG is working on a 2D elasticity-like problem.  Is that correct?
>>   And the equations are ordered node first:  [v0_x, v0_y, v1_x, v1_y,
>> v2_x, ....]  correct?
>
> Correct - although we're not using FieldSplit for historical reasons
> (the code predates FieldSplit) - but yes, it's the velocity block K of
> a Stokes system (K G; G^T 0) with variable viscosity.
>
> I finally figured out why the convergence was so much worse if we do
> set the block size: it turns out to have to do with the scaling of the
> number I put on the diagonal for the eliminated DOFs associated with
> strong bcs. This problem has free-slip bcs and I eliminate the DOF
> associated with the normal component by zeroing out the row and column
> and putting an arbitrary number on the diagonal (I chose 1.0).
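
For reference, a minimal sketch of that elimination as I read it (the
function name and everything other than the MatZeroRowsColumns() call
itself is made up, and it assumes the boundary is axis-aligned so the
normal component is a single dof):

  #include <petscmat.h>

  /* Zero the rows and columns of the normal-component dofs on the
   * free-slip boundary and put a dummy value on the diagonal.  Passing
   * NULL,NULL assumes the right-hand side is fixed up separately. */
  static PetscErrorCode EliminateNormalComponents(Mat K, PetscInt n,
                                                  const PetscInt rows[],
                                                  PetscScalar diag)
  {
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = MatZeroRowsColumns(K, n, rows, diag, NULL, NULL);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }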

Are the slip surfaces always aligned with the coordinate axes or are you
rotating the vertex coordinate system to line up?  If the latter, is
that rotation accounted for when defining the near-null space?
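
(For GAMG the near-null space would typically be the rigid body modes,
attached with something like the sketch below; the function and the
'coords' Vec are placeholders, and if the boundary dofs live in a
rotated frame the modes built from the raw coordinates would have to be
rotated to match.)

  #include <petscmat.h>

  /* Sketch: attach rigid-body modes as the near-null space for GAMG.
   * 'coords' is a vertex-coordinate Vec whose block size equals the
   * spatial dimension. */
  static PetscErrorCode AttachNearNullSpace(Mat K, Vec coords)
  {
    PetscErrorCode ierr;
    MatNullSpace   nearnull;

    PetscFunctionBegin;
    ierr = MatNullSpaceCreateRigidBody(coords, &nearnull);CHKERRQ(ierr);
    ierr = MatSetNearNullSpace(K, nearnull);CHKERRQ(ierr);
    ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }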

> Normally the number doesn't really matter as the row is completely
> decoupled from the rest of the system. However in this case, when
> using gamg with blocks, it affects the strong coupling criterion for
> *both* velocity components associated with the boundary node. This
> criterion is scaled with respect to the 1-norm of the diagonal block,
> so if the diagonal for the normal component is chosen too big, it
> also declares all connections to the DOF associated with the
> tangential component weak. In this way all the boundary nodes (the
> pairs of velocity components at these nodes) became isolated clusters
> and were disregarded at the coarse level. After choosing a smaller
> value for the diagonals the solve now converges fine in roughly the
> same number of iterations as when not setting the block size.
>
> The tricky thing here is that the choice of the dummy diagonal number
> becomes very sensitive - with a typical gamg threshold of 0.01 it can't
> be more than an order of magnitude off. Currently I use
> MatZeroRowsColumns() with a constant value for the diagonal. Because I
> have a variable viscosity it seems I can no longer do that. Now I can
> just go in and look at what the diagonal is for the other component at
> each boundary node and use that value instead (or at least a value
> that's smaller) - but it all seems a bit hackish to me. Perhaps we're
> following the wrong approach altogether and should be dealing with the
> bcs in another way? Any thoughts would be more than welcome...
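
To spell out the scaling you describe (I haven't checked the exact
expression GAMG uses, but the back-of-the-envelope argument is the
same): for a boundary node the diagonal block is roughly

  A_ii = diag(beta, d_t),

where beta is the dummy value and d_t = O(mu) is the ordinary viscous
diagonal, and the off-diagonal blocks A_ij are also O(mu).  If the
strength measure is something like

  ||A_ij|| / ||A_ii||_1  ~  ||A_ij|| / beta    for beta >> d_t,

then with a threshold of 0.01 every connection from that node is
declared weak once beta is a couple of orders of magnitude larger than
the viscous entries, and the boundary nodes drop out as isolated
aggregates, which matches what you observed.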

I think we should improve the strength-of-connection measure.  For
example, the (2 bs)×(2 bs) matrix

  A  B
  C  D

is strongly coupled if

  A^{-1} (A - B D^{-1} C)

is far from the identity.  It would normally be measured as

  sqrt(|| A^{-1} B D^{-1} C ||)

In the case of scalars, this reduces to the standard measure

  sqrt(|bc / ad|).


Note that this still doesn't work in more exotic cases like unaligned
anisotropy with certain discretizations, but it should fix the problem
you're seeing.
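
A rough sketch of what that measure could look like for bs = 2 (this is
not what GAMG currently does; the choice of the Frobenius norm and all
of the names below are mine):

  #include <math.h>

  /* Invert a 2x2 row-major matrix; returns 0 if singular. */
  static int Inv2(const double M[4], double Minv[4])
  {
    double det = M[0]*M[3] - M[1]*M[2];
    if (det == 0.0) return 0;
    Minv[0] =  M[3]/det; Minv[1] = -M[1]/det;
    Minv[2] = -M[2]/det; Minv[3] =  M[0]/det;
    return 1;
  }

  /* C = A*B for 2x2 row-major matrices. */
  static void Mult2(const double A[4], const double B[4], double C[4])
  {
    C[0] = A[0]*B[0] + A[1]*B[2];  C[1] = A[0]*B[1] + A[1]*B[3];
    C[2] = A[2]*B[0] + A[3]*B[2];  C[3] = A[2]*B[1] + A[3]*B[3];
  }

  /* Strength of coupling between two nodes whose 2x2 blocks form
   *   [ A  B ]
   *   [ C  D ],
   * measured as sqrt(||A^{-1} B D^{-1} C||_F); for scalars this
   * reduces to sqrt(|bc/(ad)|). */
  static double BlockStrength(const double A[4], const double B[4],
                              const double C[4], const double D[4])
  {
    double Ainv[4], Dinv[4], T1[4], T2[4], T3[4], frob = 0.0;
    int    k;

    if (!Inv2(A, Ainv) || !Inv2(D, Dinv)) return 0.0;
    Mult2(Ainv, B, T1);            /* A^{-1} B          */
    Mult2(T1, Dinv, T2);           /* A^{-1} B D^{-1}   */
    Mult2(T2, C, T3);              /* A^{-1} B D^{-1} C */
    for (k = 0; k < 4; k++) frob += T3[k]*T3[k];
    return sqrt(sqrt(frob));       /* sqrt of the Frobenius norm */
  }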