[petsc-users] Neumann BC with non-symmetric matrix
Barry Smith
bsmith at mcs.anl.gov
Thu Feb 25 17:05:44 CST 2016
> On Feb 25, 2016, at 4:18 PM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:
>
> Barry,
>
> Thanks a lot for the detailed discussion. Things make much more sense now, especially since I was confused about why the manual says to call both 'MatSetNullSpace' and 'MatSetTransposeNullSpace'. I have a couple more questions and some observations I have made since yesterday.
>
> 1) Is there a systematic way to tell KSP to stop when it stagnates? I am _not_ using SNES.
No. I added the issue https://bitbucket.org/petsc/petsc/issues/122/ksp-convergence-test-for-inconsistent for this. Writing the code for a new test is pretty simple, but you need to decide mathematically how you are going to detect "stagnation".
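For reference, here is a rough, untested sketch of what such a user-defined convergence test could look like; the stagnation criterion, the tolerance, and names like StagCtx/StagnationTest are made up for illustration, and in practice you would probably also keep the usual rtol/atol checks:

  typedef struct {
    PetscReal last;     /* residual norm at the previous iteration */
    PetscReal stagtol;  /* e.g. 1e-3: stop once the relative decrease per iteration falls below this */
  } StagCtx;

  PetscErrorCode StagnationTest(KSP ksp,PetscInt it,PetscReal rnorm,KSPConvergedReason *reason,void *ctx)
  {
    StagCtx *sctx = (StagCtx*)ctx;

    PetscFunctionBeginUser;
    *reason = KSP_CONVERGED_ITERATING;
    if (it > 0 && sctx->last > 0.0 && (sctx->last - rnorm)/sctx->last < sctx->stagtol) {
      *reason = KSP_CONVERGED_ITS;  /* arbitrary choice of "converged" reason to flag stagnation */
    }
    sctx->last = rnorm;
    PetscFunctionReturn(0);
  }

  /* after KSPCreate()/KSPSetOperators(); assumes a PetscErrorCode ierr is declared */
  StagCtx sctx = {0.0, 1.e-3};
  ierr = KSPSetConvergenceTest(ksp,StagnationTest,&sctx,NULL);CHKERRQ(ierr);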
>
> 2) Once KSP converges to the least-squares solution, the residual must be in the nullspace of A^T because otherwise it would have been reduced by the solver. So is it (at least theoretically) possible to use the residual vector as an approximate basis for n(A^T)? In general this wouldn't be enough, but I'm thinking since the nullspace is one-dimensional, maybe I could use the residual itself to manually project the solution onto the range of A after calling KSPSolve?
I don't see why this wouldn't work. Just run one initial solve until it stagnates and then use the resulting residual as the null space of A^T for future solves (with the same matrix, of course). It could be interesting to plot the residual to see what it looks like and whether it makes sense physically.
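A rough sketch of that workflow (untested, and assuming A, ksp, b, x already exist and KSPSetOperators(ksp,A,A) has been called) might look like:

  Vec          r;
  MatNullSpace nsp,ntsp;

  /* if the constant vector spans the null space of A itself, attach that first */
  ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_TRUE,0,NULL,&nsp);CHKERRQ(ierr);
  ierr = MatSetNullSpace(A,nsp);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);

  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);          /* initial solve, run until it stagnates */

  ierr = VecDuplicate(b,&r);CHKERRQ(ierr);
  ierr = MatMult(A,x,r);CHKERRQ(ierr);             /* r = A x     */
  ierr = VecAYPX(r,-1.0,b);CHKERRQ(ierr);          /* r = b - A x */
  ierr = VecNormalize(r,NULL);CHKERRQ(ierr);       /* null space basis vectors must have norm 1 */
  ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&r,&ntsp);CHKERRQ(ierr);
  ierr = MatSetTransposeNullSpace(A,ntsp);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&ntsp);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  /* subsequent KSPSolve() calls with the same A will now remove this component from b */

You could also VecView() r before normalizing it to look at what it represents physically.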
>
> 3) Are preconditioners applied to the left by default? If not, which ones are and which are not?
It actually depends on the KSP: some algorithms only support right preconditioning, some implementations only support one or the other (depending on who wrote them), and some support both. In PETSc, CG, GMRES, and BiCGStab use left preconditioning by default; GMRES and BiCGStab also support right.
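The side can be chosen explicitly, for example (assuming a KSP named ksp):

  ierr = KSPSetType(ksp,KSPGMRES);CHKERRQ(ierr);
  ierr = KSPSetPCSide(ksp,PC_RIGHT);CHKERRQ(ierr);

or from the command line with -ksp_type gmres -ksp_pc_side right.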
>
> 4) So then if I provide the nullspace of A, KSP residual should converge, correct? I have made a few observations:
>
> 4-1) When I provide the nullspace of A through MatSetNullSpace, I (generally) do see the (preconditioned) residual become very small (~1e-12 or so), but then it sometimes stagnates (say around 1e-10). Is this normal?
There is only so much convergence one can expect from linear solvers; how far one can drive down the residual depends at least in part on the conditioning of the matrix. The larger the condition number, the less reduction in the residual you can expect.
>
> 4-2) Another observation is that the true residual stagnates much earlier, which I assume is an indication that the RHS is in fact not in the range of A.
You can hope this is the case.
Of course one cannot really know if the residual is stagnating due to an inconsistent right hand side or for some other reason. But if it stagnates at 10^-4 it is probably due to an inconsistent right hand side; if it stagnates at 10^-12 the right hand side is probably consistent. If it stagnates at 10^-8 then it could be due to an inconsistent rhs and/or some other reason.
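To see where the preconditioned and the true (unpreconditioned) residual norms each level off, the standard monitoring options are handy, e.g.

  -ksp_monitor_true_residual -ksp_converged_reason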
>
> 4-3) Also, I've seen that the choice of KSP and PC have considerable effects on the solver. For instance, by default I use hypre+bcgs and I have noticed I need to change the coarse-grid relaxation from Gaussian elimination to symmetric-SOR/Jacobi, otherwise hypre has issues. I assume this is because the AMG preconditioner is also singular on the coarse level?
Yes this is likely the reason.
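For reference, that change can also be made from the command line; the exact spelling of the value string can be checked with -help, but it should be close to

  -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_relax_type_coarse symmetric-SOR/Jacobi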
>
> 4-4) Along the same lines, I tried a couple of other PCs such as {jacobi, sor, gamg, ilu} and none of them were able to converge with bcgs as the KSP. However, with gmres, almost all of them converge
Sure, this is expected.
> with the exception of gamg.
>
> 4-5) If I use gmres instead of bcgs, and for any of {jacobi, sor, ilu}, the true residual seems to be generally 2-3 orders of magnitude smaller (1e-6 vs 1e-3). I suppose this is just because gmres is more robust?
Yes, with a single system I would recommend sticking with GMRES and avoiding Bcgs.
>
> 4-6) With hypre, the true residual is always larger (~1e-3) than other PCs no matter if I use bcgs or gmres.
ok.
>
> Sorry for the long discussion but this has turned out to be quite educational for me!
>
> Thanks,
> Mohammad
>
> On Wed, Feb 24, 2016 at 2:21 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> > On Feb 24, 2016, at 9:07 AM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:
> >
> > Barry,
> >
> > On Wednesday, February 24, 2016, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> > > On Feb 24, 2016, at 12:07 AM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:
> >
> >
> > At the PDE level the compatibility condition is satisfied. However, I suspect that at the discrete level the rhs is not exactly in the range. The reason for my suspicion is that I know for a fact that my discretization is not conservative at the hanging nodes.
>
> Could be.
>
> >
> > Thanks for the suggestion. I'll give it a try. However, does GMRES fundamentally behave differently than BiCGSTAB for these systems? I have seen people saying that it can keep the solution in the range if the rhs is already in the range, but I thought any KSP would do the same?
>
>
> Here is the deal. There are two issues here
>
> 1) Making sure that b is in the range of A. If b is not in the range of A then it is obviously impossible to find an x such that A x = b. If we divide b into a part in the range of A, call it br, and the rest, call it bp, then b = br + bp and one can solve Ax = br; the left-over residual is bp. Normally if you run GMRES with an inconsistent right hand side (that is, bp != 0) it will solve Ax = br automatically and thus give you a "least squares" answer, which is likely what you want. This means in some sense you don't really need to worry about making sure that b is in the range of A. But the residuals of GMRES will stagnate, which makes sense because it cannot get rid of the bp part. In the least squares sense, however, GMRES has converged. If you provide MatSetTransposeNullSpace() then KSP automatically removes this space from b when it starts the Krylov method; this is nice because the GMRES residuals will then go to zero.
>
> 2) Making sure that huge multiples of the null space of A do not get into the computed solution.
>
> With left preconditioning, Krylov methods construct the solution in the space {Bb, BABb, BABABb, ...}; if the range of B contains entries in the null space of A then the Krylov space will contain vectors in the null space of A. What can happen is that in each iteration of the Krylov method those entries grow and you end up with a solution x + xn where xn is in the null space of A and very large, thus it looks like GMRES is not converging since the solution "jumps" each iteration. If you force the range of B to exclude the null space of A, that is, project out the null space of A after applying B, then nothing in the null space ever gets into the Krylov space and you get the "minimum norm" solution to the least squares problem, which is what you almost always want. When you provide MatSetNullSpace() the KSP solvers automatically remove the null space after applying B.
>
> All this stuff applies for any Krylov method.
>
> So based on my understanding, you should just provide the null space that you can, forget about the transpose null space, and use left preconditioning with GMRES (forget about what I said about trying right preconditioning). Let GMRES iterate until the residual norm has stabilized and use the resulting solution, which is the least squares solution. If you are using KSP inside SNES you may need to set SNESSetMaxLinearSolveFailures() to a large value so it doesn't stop when it thinks the GMRES has failed.
>
> Barry
>
> Matt and Jed, please check my logic I often flip my ranges/null spaces and may have incorrect presentation above.
> >
> >
> > > >
> > > > 2) I have tried fixing the solution at an arbitrary point, and while it generally works, for some problems I get numerical artifacts, e.g. slight asymmetry in the solution and/or increased error close to the point where I fix the solution. Is this, more or less, expected as a known artifact?
> >
> > Yeah this approach is not good. We never recommend it.
> > > >
> > > > 3) An alternative to 2 is to enforce some global constraint on the solution, e.g. to require that the average be zero. My question here is two-fold:
> > >
> > > Requiring the average be zero is exactly the same as providing a null space of the constant function. Saying the average is zero is the same as saying the solution is orthogonal to the constant function. I don't see any reason to introduce the Lagrange multiplier and all its complications inside of just providing the constant null space.
> > >
> > > Is this also true at the discrete level when the matrix is non-symmetric? I have always viewed this as just a constraint that could really be anything.
> > >
> > > >
> > > > 3-1) Is this generally any better than solution 2, in terms of not messing too much with the condition number of the matrix?
> > > >
> > > > 3-2) I don't quite know how to implement this using PETSc. Generally speaking I'd like to solve
> > > >
> > > > | A    U | | X |   | B |
> > > > |        | |   | = |   |
> > > > | U^T  0 | | s |   | 0 |
> > > >
> > > >
> > > > where U is a constant vector (of ones) and s is effectively a Lagrange multiplier. I suspect I need to use MatCreateSchurComplement and pass that to the KSP? Or do I need to create my own matrix type from scratch through MatCreateShell?
> > > >
> > > > Any help is appreciated!
> > > >
> > > > Thanks,
> > > > Mohammad
> > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Sent from Gmail Mobile
> >
> >
> >
> > --
> > Sent from Gmail Mobile
>
>