[petsc-users] Neumann BC with non-symmetric matrix
Barry Smith
bsmith at mcs.anl.gov
Wed Feb 24 13:21:11 CST 2016
> On Feb 24, 2016, at 9:07 AM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:
>
> Barry,
>
> On Wednesday, February 24, 2016, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> > On Feb 24, 2016, at 12:07 AM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:
>
>
> At the PDE level the compatibility condition is satisfied. However, I suspect that at the discrete level the rhs is not exactly in the range. The reason for my suspicion is that I know for a fact that my discretization is not conservative at the hanging nodes.
Could be.
>
> Thanks for the suggestion. I'll give it a try. However, does GMRES fundamentally behave differently than BiCGSTAB for these systems? I have seen people say that it can keep the solution in the range if the rhs is already in the range, but I thought any KSP would do the same?
Here is the deal. There are two issues here:
1) Making sure that b is in the range of A. If b is not in the range of A then it is obviously impossible to find an x such that A x = b. If we divide b into a part in the range of A, call it br, and the rest, call it bp, then b = br + bp and one can solve Ax = br; the leftover residual is bp. Normally if you run GMRES with an inconsistent right-hand side (that is, bp != 0) it will solve Ax = br automatically and thus give you a "least squares" answer, which is likely what you want. This means in some sense you don't really need to worry about making sure that b is in the range of A. But the residuals of GMRES will stagnate, which makes sense because it cannot get rid of the bp part. In the least-squares sense, however, GMRES has converged. If you provide MatSetTransposeNullSpace() then KSP automatically removes this space from b when it starts the Krylov method; this is nice because the GMRES residuals will go to zero (see the sketch after these two points).
2) Making sure that huge multiples of the null space of A do not get into the computed solution.
With left preconditioning, Krylov methods construct the solution in the space {Bb, BABb, BABABb, ...}. If the range of B contains entries in the null space of A then the Krylov space will contain vectors in the null space of A. What can happen is that in each iteration those entries grow and you end up with a solution x + xn, where xn is in the null space of A and very large; thus it looks like GMRES is not converging, since the solution "jumps" each iteration. If you force the range of B to exclude the null space of A, that is, project out the null space of A after applying B, then nothing in the null space ever gets into the Krylov space and you get the "minimum norm" solution to the least-squares problem, which is what you almost always want. When you provide MatSetNullSpace() the KSP solvers automatically remove the null space after applying B (again, see the sketch below).
All this stuff applies for any Krylov method.
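In PETSc code, setting both spaces might look something like this (a rough, untested sketch; it assumes the null spaces of A and of A^T are both just the constant vector, which holds for a symmetric Neumann discretization but may not for a non-symmetric one like yours):

  #include <petscksp.h>
  /* inside a function returning PetscErrorCode, with Mat A already assembled */
  MatNullSpace   nsp;
  PetscErrorCode ierr;

  /* PETSC_TRUE means the space contains the constant vector */
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nsp);CHKERRQ(ierr);
  ierr = MatSetNullSpace(A,nsp);CHKERRQ(ierr);          /* KSP projects this out after applying B   */
  ierr = MatSetTransposeNullSpace(A,nsp);CHKERRQ(ierr); /* KSP removes the inconsistent part of b   */
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);       /* the Mat keeps its own reference          */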
So based on my understanding, you should just provide the null space (which you can) and forget about the transpose null space, and use left preconditioning with GMRES (forget what I said about trying right preconditioning). Let GMRES iterate until the residual norm has stabilized and use the resulting solution, which is the least-squares solution. If you are using KSP inside SNES you may need to set SNESSetMaxLinearSolveFailures() to a large value so it doesn't stop when it thinks GMRES has failed.
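A sketch of the corresponding solver setup (again untested; the ksp and snes objects are assumed to already exist):

  /* GMRES with left preconditioning (the default side for GMRES) */
  ierr = KSPSetType(ksp,KSPGMRES);CHKERRQ(ierr);
  ierr = KSPSetPCSide(ksp,PC_LEFT);CHKERRQ(ierr);
  /* if the KSP lives inside a SNES, don't let SNES abort when GMRES
     "fails" to converge on an inconsistent right-hand side */
  ierr = SNESSetMaxLinearSolveFailures(snes,PETSC_MAX_INT);CHKERRQ(ierr);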
Barry
Matt and Jed, please check my logic; I often flip my ranges/null spaces and may have an incorrect presentation above.
>
>
> > >
> > > 2) I have tried fixing the solution at an arbitrary point, and while it generally works, for some problems I get numerical artifacts, e.g. slight asymmetry in the solution and/or increased error close to the point where I fix the solution. Is this, more or less, expected as a known artifact?
>
> Yeah this approach is not good. We never recommend it.
> > >
> > > 3) An alternative to 2 is to enforce some global constraint on the solution, e.g. to require that the average be zero. My question here is two-fold:
> >
> > Requiring the average be zero is exactly the same as providing a null space of the constant function. Saying the average is zero is the same as saying the solution is orthogonal to the constant function. I don't see any reason to introduce the Lagrange multiplier and all its complications instead of just providing the constant null space.
> >
> > Is this also true at the discrete level when the matrix is non-symmetric? I have always viewed this as just a constraint that could really be anything.
> >
> > >
> > > 3-1) Is this generally any better than solution 2, in terms of not messing too much with the condition number of the matrix?
> > >
> > > 3-2) I don't quite know how to implement this using PETSc. Generally speaking I'd like to solve
> > >
> > > | A    U |   | X |   | B |
> > > |        | * |   | = |   |
> > > | U^T  0 |   | s |   | 0 |
> > >
> > >
> > > where U is a constant vector (of ones) and s is effectively a Lagrange multiplier. I suspect I need to use MatCreateSchurComplement and pass that to the KSP? Or do I need to create my own matrix type from scratch through MatCreateShell?
> > >
> > > Any help is appreciated!
> > >
> > > Thanks,
> > > Mohammad
> > >