<div dir="ltr">So it turns out that ex49 was not dealing with block size correctly. MatGetSubMatrix gets the block sizes of the new matrix from the IS. Also the BC IS was not blocked, so I made it blocked.<div><br></div><div>Could your code be making the same mistakes?<br><div><br></div><div>I've put in a pull request. </div><div><br></div><div>I have to run but I just realized that I did not update the gold copy of the regression test. I will add that later. </div><div><br></div><div>Mark </div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 18, 2014 at 10:41 AM, Mark Adams <span dir="ltr"><<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks for updating me Stephan,<div><br></div><div>This is a common problem. I'm surprised we have not seen it more.</div><div><br></div><div>Note, the problem with the solver dying when MatSetBlockSize is set (correctly) is some sort of bug. You have Stokes and use FieldSplit, so GAMG is working on a 2D elasticity-like problem. Is that correct? And the equations are ordered node first: [v0_x, v0_y, v1_x, v1_y, v2_x, ....] correct?</div><div><br></div><div>Also, have you tried this with hypre? It would be useful to see if hypre behaves the same way.<br><div class="gmail_extra"><br></div><div class="gmail_extra">Anyway, I think the best way to fix the zero diagonal problem is to fill in the low-rank factor (R in the QR) with dummy diagonal entries. This is in the GAMG code, so I can stick a fix in. This is what I did in my old AMG solver. 
These dummy values don't matter because, as you noted, P (Q) has zero columns for these equations and so nothing gets interpolated into or out of these equations.</div><div class="gmail_extra"><br></div><div class="gmail_extra">I'll try to do this today and put in a pull request.</div><span class="HOEnZb"><font color="#888888"><div class="gmail_extra"><br></div><div class="gmail_extra">Mark</div></font></span><div><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 18, 2014 at 10:05 AM, Stephan Kramer <span dir="ltr"><<a href="mailto:s.kramer@imperial.ac.uk" target="_blank">s.kramer@imperial.ac.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span>On 18/09/14 13:45, Mark Adams wrote:<br>
> Can you remind me how this problem occurred? I should make it a switch<br>
> in the code because it is not super cheap to do and you are the only<br>
> ones to have had this problem.<br>
<br>
</span>1) The problem is easily reproducible; just run:<br>
<span><br>
./ex49 -elas_pc_type gamg -mx 100 -my 100 -mat_no_inode<br>
<br>
</span>in src/ksp/ksp/examples/tutorials. See output below. (The -mat_no_inode is not necessary but simplifies things).<br>
<br>
2) The problem is caused by the fact that the aggregates at the finest level (that form the nodes of the one-but-finest level) cannot be guaranteed to be large enough to *independently* support all the modes of the near nullspace.<br>
<br>
For example, in a 2D Stokes velocity problem, if one of the aggregates is just an isolated node (and as far as I can see this is unavoidable), then even with blocking there are only 2 DOFs at the finest level associated with that aggregate, whereas it is represented by 3 DOFs at the coarser level (1 for each of the near-nullspace modes). Therefore the prolongation operator from the coarser to the finest level is necessarily rank deficient. In fact, the orthogonalisation will simply produce a zero column in this case.<br>
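To make this concrete, here is a minimal sketch (plain Python, not PETSc code) of the orthogonalisation step for an isolated-node aggregate in 2D: the three rigid-body modes restricted to a single node give a 2x3 local block, whose columns are linearly dependent, so Gram-Schmidt necessarily produces a zero third column:

```python
import math

# The three 2D rigid-body near-nullspace modes (x-translation,
# y-translation, rotation) restricted to a single node at (x, y).
# An isolated-node aggregate contributes only 2 rows (v_x, v_y),
# so the 2x3 local block has rank at most 2.
x, y = 2.0, 3.0
B = [[1.0, 0.0, -y],
     [0.0, 1.0,  x]]

def column(M, j):
    return [row[j] for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Classical Gram-Schmidt on the columns of B.
Q = []
for j in range(3):
    v = column(B, j)
    for q in Q:
        c = dot(q, v)
        v = [vi - c * qi for vi, qi in zip(v, q)]
    norm = math.sqrt(dot(v, v))
    if norm > 1e-12:
        Q.append([vi / norm for vi in v])
    else:
        # Rank deficiency: the third mode is a linear combination of the
        # first two on this aggregate, leaving a zero column in Q (and P).
        Q.append([0.0] * len(v))

print(Q[2])  # -> [0.0, 0.0]
```

Any aggregate with fewer fine DOFs than near-nullspace modes hits this, regardless of the node coordinates.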
<br>
The zero column in the prolongator results in a zero diagonal entry in the coarse-level operator (P^T A P), which causes the smoothing at that level to fail (only with SOR, as Jacobi magically fixes this).<br>
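The effect on the Galerkin coarse operator can be seen with a tiny sketch (plain Python, not PETSc code; the matrices are made up for illustration): a zero column in P yields a zero diagonal entry in P^T A P, which is exactly what MatSOR then fails on in the trace below:

```python
def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

# A small SPD fine-grid operator (1D Laplacian-like, 4x4).
A = [[ 2.0, -1.0,  0.0,  0.0],
     [-1.0,  2.0, -1.0,  0.0],
     [ 0.0, -1.0,  2.0, -1.0],
     [ 0.0,  0.0, -1.0,  2.0]]

# A prolongator whose second column is identically zero,
# as produced by a rank-deficient aggregate.
P = [[1.0, 0.0],
     [1.0, 0.0],
     [0.0, 0.0],
     [0.0, 0.0]]

# Galerkin coarse operator: the zero column of P gives a zero
# row/column, and hence a zero diagonal entry, in P^T A P.
Ac = matmul(transpose(P), matmul(A, P))
print(Ac)  # -> [[2.0, 0.0], [0.0, 0.0]]
```

SOR divides by the diagonal of Ac, so the second coarse equation is unsolvable as it stands; a dummy nonzero diagonal entry there is harmless precisely because that equation is decoupled from everything.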
<br>
3) The likelihood of hitting this failure increases if you do not set the block size (MatSetBlockSize). As a result there was a whole, only tangentially related, discussion about the fact that we got very poor performance when setting the block size (an order of magnitude more iterations with exactly the same matrix). Interesting as it may be, that whole discussion in itself had nothing to do with the issue above, which may occur whether or not you set the block size.<br>
<br>
IMHO the method is simply not robust with the default smoothing settings. However, if you prefer the fix to be optional, we would be very happy with that solution as well.<br>
Thanks for looking into this.<br>
<br>
Cheers<br>
Stephan<br>
<br>
skramer@gyre:~/git/petsc/src/ksp/ksp/examples/tutorials$ ./ex49 -elas_pc_type gamg -mx 100 -my 100 -mat_no_inode<br>
<span>[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------<br>
[0]PETSC ERROR: Arguments are incompatible<br>
[0]PETSC ERROR: Zero diagonal on row 1<br>
</span>[0]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html</a> for trouble shooting.<br>
[0]PETSC ERROR: Petsc Development GIT revision: v3.5.2-300-gdaea54c GIT Date: 2014-09-17 20:06:25 -0500<br>
[0]PETSC ERROR: ./ex49 on a arch-linux2-c-debug named gyre by skramer Thu Sep 18 14:51:52 2014<br>
[0]PETSC ERROR: Configure options --download-fblaslapack=1 --download-blacs=1 --download-scalapack=1 --download-ptscotch=1 --download-mumps=1 --download-hypre=1 --download-suitesparse=1 --download-ml=1 --with-debugging=1 --prefix=/home/skramer/petsc/master-debug-18092014<br>
[0]PETSC ERROR: #1 MatInvertDiagonal_SeqAIJ() line 1728 in /home/skramer/git/petsc/src/mat/impls/aij/seq/aij.c<br>
[0]PETSC ERROR: #2 MatSOR_SeqAIJ() line 1760 in /home/skramer/git/petsc/src/mat/impls/aij/seq/aij.c<br>
[0]PETSC ERROR: #3 MatSOR() line 3644 in /home/skramer/git/petsc/src/mat/interface/matrix.c<br>
[0]PETSC ERROR: #4 PCApply_SOR() line 35 in /home/skramer/git/petsc/src/ksp/pc/impls/sor/sor.c<br>
[0]PETSC ERROR: #5 PCApply() line 440 in /home/skramer/git/petsc/src/ksp/pc/interface/precon.c<br>
[0]PETSC ERROR: #6 KSP_PCApply() line 230 in /home/skramer/git/petsc/include/petsc-private/kspimpl.h<br>
[0]PETSC ERROR: #7 KSPSolve_Chebyshev() line 414 in /home/skramer/git/petsc/src/ksp/ksp/impls/cheby/cheby.c<br>
[0]PETSC ERROR: #8 KSPSolve() line 460 in /home/skramer/git/petsc/src/ksp/ksp/interface/itfunc.c<br>
[0]PETSC ERROR: #9 PCMGMCycle_Private() line 19 in /home/skramer/git/petsc/src/ksp/pc/impls/mg/mg.c<br>
[0]PETSC ERROR: #10 PCMGMCycle_Private() line 48 in /home/skramer/git/petsc/src/ksp/pc/impls/mg/mg.c<br>
[0]PETSC ERROR: #11 PCApply_MG() line 337 in /home/skramer/git/petsc/src/ksp/pc/impls/mg/mg.c<br>
[0]PETSC ERROR: #12 PCApply() line 440 in /home/skramer/git/petsc/src/ksp/pc/interface/precon.c<br>
[0]PETSC ERROR: #13 KSP_PCApply() line 230 in /home/skramer/git/petsc/include/petsc-private/kspimpl.h<br>
[0]PETSC ERROR: #14 KSPInitialResidual() line 63 in /home/skramer/git/petsc/src/ksp/ksp/interface/itres.c<br>
[0]PETSC ERROR: #15 KSPSolve_GMRES() line 234 in /home/skramer/git/petsc/src/ksp/ksp/impls/gmres/gmres.c<br>
[0]PETSC ERROR: #16 KSPSolve() line 460 in /home/skramer/git/petsc/src/ksp/ksp/interface/itfunc.c<br>
[0]PETSC ERROR: #17 solve_elasticity_2d() line 1053 in /home/skramer/git/petsc/src/ksp/ksp/examples/tutorials/ex49.c<br>
[0]PETSC ERROR: #18 main() line 1104 in /home/skramer/git/petsc/src/ksp/ksp/examples/tutorials/ex49.c<br>
<div><div>[0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint@mcs.anl.gov----------<br>
<br>
<br>
</div></div></blockquote></div><br></div></div></div></div></div>
</blockquote></div><br></div>