[petsc-users] Neumann BC with non-symmetric matrix

Mohammad Mirzadeh mirzadeh at gmail.com
Wed Feb 24 09:07:16 CST 2016


Barry,

On Wednesday, February 24, 2016, Barry Smith <bsmith at mcs.anl.gov> wrote:

>
> > On Feb 24, 2016, at 12:07 AM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:
> >
> >
> > Barry,
> > On Wednesday, February 24, 2016, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> > > On Feb 23, 2016, at 11:35 PM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:
> > >
> > > Dear all,
> > >
> > > I am dealing with a situation for which I was hoping to get some
> suggestions here. Suppose that after discretizing a Poisson equation with
> purely Neumann (or periodic) bc I end up with a matrix that is *almost*
> symmetric, i.e. it is symmetric for almost all grid points with the
> exception of a few points.
> >
> >   How come it is not purely symmetric? The usual finite elements with
> pure Neumann or periodic bc will give a completely symmetric matrix.
> >
> >   Barry
> >
> >
> > So this is a finite difference discretization on adaptive Cartesian
> grids. It turns out that the discretization is non-symmetric at the
> coarse-fine interface. It's actually not because of the BC itself.
>
>   Oh yes, I remember that issue vaguely now.
>
> > >
> > > The correct way of handling this problem is by specifying the
> nullspace to MatSetNullSpace. However, since the matrix is non-symmetric in
> general I would need to pass the nullspace of A^T. Now it turns out that if
> A is *sufficiently close to being symmetric*, I can get away with the
> constant vector, which is the nullspace of A and not A^T, but obviously
> this does not always work. Sometimes the KSP converges, and in other
> situations the residual stagnates, which is to be expected.
> > >
> > > Now, here are my questions (sorry if they are too many!):
> > >
> > > 1) Is there any efficient way of calculating nullspace of A^T in this
> case?
>
>   Not that I know of.
>
> > Is SVD the only way?
>
>   I think if you make sure you only use right preconditioning such as with
> -ksp_type gmres -ksp_pc_side right AND you know that the right hand side is
> automatically in the range of A then the null space of A^T is never needed
> in the computation. Is your right hand side in the range of A?
>
>   Barry
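
[For small test problems, the left null space asked about above can be
computed directly from the SVD. A minimal numpy sketch, using a made-up
singular, nonsymmetric 3x3 matrix as a stand-in for the discretization
(illustrative only; an SVD is of course not practical at PETSc scale):]

```python
import numpy as np

# Made-up 3x3 singular, nonsymmetric matrix: every row sums to zero,
# so null(A) is spanned by the constant vector, but null(A^T) is not.
A = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.5, -0.5,  2.0]])

# The left/right singular vectors belonging to the (numerically) zero
# singular value span null(A^T) and null(A) respectively.
U, s, Vt = np.linalg.svd(A)
left_null = U[:, -1]     # null(A^T): what the rhs must be orthogonal to
right_null = Vt[-1, :]   # null(A): the constant vector here

assert s[-1] < 1e-12
assert np.allclose(A.T @ left_null, 0.0, atol=1e-10)
assert np.allclose(A @ right_null, 0.0, atol=1e-10)
```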


At the PDE level the compatibility condition is satisfied. However, I
suspect that at the discrete level the rhs is not exactly in the range.
The reason for my suspicion is that I know for a fact that my
discretization is not conservative at the hanging nodes.

Thanks for the suggestion. I'll give it a try. However, does GMRES
fundamentally behave differently than BiCGSTAB for these systems? I have
seen people say that GMRES can keep the solution in the range of A if the
rhs is already in the range, but I thought any KSP would do the same?
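
[For what it's worth, the subspace part of this can be checked directly:
if the rhs lies in range(A), then every Krylov basis vector b, Ab, A^2 b, ...
also lies in range(A), independent of which KSP builds the subspace. A small
numpy sketch with a made-up singular, nonsymmetric matrix (the GMRES-vs-BiCGSTAB
difference in robustness is a separate matter of their minimization properties):]

```python
import numpy as np

# Made-up singular, nonsymmetric matrix (rows sum to zero).
A = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.5, -0.5,  2.0]])

# Left null vector w, so that range(A) = { y : w . y = 0 }.
w = np.linalg.svd(A)[0][:, -1]

# Manufacture a rhs that lies in range(A).
b = A @ np.array([1.0, -2.0, 0.5])

# b, Ab, A^2 b, ... all stay in range(A), so the iterates of any Krylov
# method started from x0 = 0 remain in range(A) as well.
v = b.copy()
for _ in range(4):
    assert abs(w @ v) < 1e-10
    v = A @ v
```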


> > >
> > > 2) I have tried fixing the solution at an arbitrary point, and while
> it generally works, for some problems I get numerical artifacts, e.g.
> slight asymmetry in the solution and/or increased error close to the point
> where I fix the solution. Is this, more or less, expected as a known
> artifact?
>
>   Yeah this approach is not good. We never recommend it.
> > >
> > > 3) An alternative to 2 is to enforce some global constraint on the
> solution, e.g. to require that the average be zero. My question here is
> two-fold:
> >
>   Requiring the average be zero is exactly the same as providing a null
> space of the constant function. Saying the average is zero is the same as
> saying the solution is orthogonal to the constant function. I don't see any
> reason to introduce the Lagrange multiplier and all its complications
> instead of just providing the constant null space.
> >
> > Is this also true at the discrete level when the matrix is
> non-symmetric? I have always viewed this as just a constraint that could
> really be anything.
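
[The equivalence above can be seen on a toy system, even a nonsymmetric one,
as long as null(A) is the constant vector: the minimum-norm solution is
exactly the zero-average member of the solution family. A numpy sketch (the
matrix and rhs are made up for illustration, not from the actual discretization):]

```python
import numpy as np

# Made-up singular, nonsymmetric matrix with null(A) = span{constant}.
A = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.5, -0.5,  2.0]])
b = A @ np.array([0.3, -1.1, 0.8])   # compatible rhs

# Minimum-norm least-squares solution: orthogonal to null(A), i.e. its
# average is zero -- the same representative the constant null space picks.
x = np.linalg.lstsq(A, b, rcond=None)[0]
assert abs(x.mean()) < 1e-10

# Any other solution differs by a constant shift; subtracting the mean
# recovers the same zero-average representative.
x_shifted = x + 3.7
assert np.allclose(x_shifted - x_shifted.mean(), x)
```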
> >
> > >
> > > 3-1) Is this generally any better than solution 2, in terms of not
> messing too much with the condition number of the matrix?
> > >
> > > 3-2) I don't quite know how to implement this using PETSc. Generally
> speaking I'd like to solve
> > >
> > > | A        U |   | X |   | B |
> > > |            | * |   | = |   |
> > > | U^T      0 |   | s |   | 0 |
> > >
> > >
> > > where U is a constant vector (of ones) and s is effectively a Lagrange
> multiplier. I suspect I need to use MatCreateSchurComplement and pass that
> to the KSP? Or do I need to create my own matrix type from scratch through
> MatCreateShell?
> > >
> > > Any help is appreciated!
> > >
> > > Thanks,
> > > Mohammad
> > >
> > >
> >
> >
> >
> > --
> > Sent from Gmail Mobile
>
>
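
[The bordered system quoted above can also be sanity-checked densely before
committing to a MatCreateShell implementation. A small numpy sketch, with a
made-up singular, nonsymmetric matrix and a manufactured compatible rhs: the
constraint row pins the average to zero, and the multiplier s comes out
(numerically) zero when the rhs is compatible:]

```python
import numpy as np

n = 3
# Made-up singular, nonsymmetric matrix with null(A) = span{ones}.
A = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.5, -0.5,  2.0]])
u = np.ones(n)
b = A @ np.array([0.3, -1.1, 0.8])   # compatible rhs

# Bordered saddle-point system [[A, u], [u^T, 0]] enforcing sum(x) = 0.
K = np.block([[A, u[:, None]],
              [u[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([b, [0.0]]))
x, s_mult = sol[:n], sol[n]

assert abs(x.sum()) < 1e-10    # the zero-average constraint holds
assert abs(s_mult) < 1e-10     # multiplier vanishes for a compatible rhs
assert np.allclose(A @ x, b)
```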

-- 
Sent from Gmail Mobile