[petsc-users] Ainsworth formula to solve saddle point problems / preconditioner for shell matrices

Barry Smith bsmith at petsc.dev
Tue Oct 6 10:51:37 CDT 2020



> On Oct 6, 2020, at 6:57 AM, Olivier Jamond <olivier.jamond at cea.fr> wrote:
> 
> 
> On 03/10/2020 00:23, Barry Smith wrote:
>>    I think what Jed is saying is that you should just actually build your preconditioner for your Ct*C + Qt*S*Q operator with S. Because Ct is tall and skinny, the eigenstructure of Ct*C + Qt*S*Q is just the eigenstructure of S with a low rank "modification", and Krylov methods (GMRES) are good at solving problems where the eigenstructure of the preconditioner is only a small rank modification of the eigenstructure of the operator you supply to GMRES. In the best situation each new iteration of GMRES corrects one more of the "rogue" eigendirections. I would first use a direct solver with S just to test how well it works as a preconditioner and then switch to GAMG or whatever should work efficiently for solving your particular S matrix.
>> 
>>   I'd be interested in hearing how well the Ainsworth Formula works, it is something that might be worth adding to PCFIELDSPLIT.
>> 
>> 
>>   Barry
> 
> Hi Barry,
> 
> Thanks for these clarifications.
> 
> To give some context, the test I am working on is a traction test on an elastoplastic cube in large strain, on which I apply 2% strain at the first loading increment. The cube has 14739 dofs, and the C matrix has 867 rows.
> 
> In this simple case, the C matrix just encodes simple Dirichlet conditions. Then Q is diagonal, with 1. on dofs without a Dirichlet condition and 0. on dofs with one. Q'*S*Q is like S with zeros on the rows/columns of the Dirichlet dofs, and C'*C then re-adds a non-null value on the diagonal for those dofs. In the end, I feel that in this case the Ainsworth method does exactly the same thing as the row/column elimination that can be done with MatZeroRowsColumns and the optional x and b vectors.
> 
> On this test, with '-ksp_rtol 1.e-9' and '-ksp_type gmres', using S as the preconditioning matrix and a direct solver gives 65 GMRES iterations for my first Newton iteration (where S is SPD) and between 170 and 290 for the next ones (S is still symmetric but has negative eigenvalues). If I use '-pc_type gamg', the number of GMRES iterations for the first (SPD) Newton iteration is 14 with Sp / 23 with S, and for the next ones (not SPD) it is ~45 with Sp / ~180 with S.

   Given the structure of C it seems you should just explicitly construct Sp and use GAMG (or other preconditioners, even a direct solver) directly on Sp. Trying to avoid explicitly forming Sp will give you a much slower performing solver, for what benefit? If C were just some generic monster then forming Sp might be unrealistic, but in your case C*Ct is block diagonal with tiny blocks, which means (C*Ct)^(-1) is block diagonal with tiny blocks (the blocks are the inverses of the blocks of C*Ct).

    Sp = Ct*C  + Qt * S * Q = Ct*C  +  [I - Ct * (C*Ct)^(-1)*C] S [I - Ct * (C*Ct)^(-1)*C] 

[Ct * (C*Ct)^(-1)*C] will again be block diagonal with slightly larger blocks.

You can compute D = C*Ct with MatMatMult(), then write custom code that zips through the diagonal blocks of D, inverting each of them to get iD. Then use MatPtAP() applied to C and iD to get Ct * (C*Ct)^(-1)*C, use MatScale()/MatShift() to form the I minus that term, then MatPtAP() or MatRARt() to get [I - Ct * (C*Ct)^(-1)*C] S [I - Ct * (C*Ct)^(-1)*C], and finally MatAXPY() to get Sp. The complexity of each of the Mat operations is very low because of the absurdly simple structure of C and its descendants. You might even be able to just use MUMPS to give you the explicit inv(C*Ct) without writing custom code to get iD.
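For concreteness, here is a rough sketch of that sequence with the PETSc C API (untested, error checking omitted, all variable names illustrative). It assumes the simple Dirichlet case where C*Ct is actually diagonal, so inv(C*Ct) is just the reciprocal of the diagonal; for a genuinely block-diagonal C*Ct you would replace that step with the custom block-inversion described above.

    Mat C, S;                              /* assumed already assembled */
    Mat D, iD, CtiDC, Q, QtSQ, CtC, Sp;
    Vec d;

    /* D = C*Ct */
    MatMatTransposeMult(C, C, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &D);

    /* iD = inv(D); here D is assumed diagonal, so just invert its diagonal */
    MatCreateVecs(D, &d, NULL);
    MatGetDiagonal(D, d);
    VecReciprocal(d);
    MatDuplicate(D, MAT_DO_NOT_COPY_VALUES, &iD);
    MatDiagonalSet(iD, d, INSERT_VALUES);

    /* Ct * inv(C*Ct) * C, via P^T A P with A = iD and P = C */
    MatPtAP(iD, C, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &CtiDC);

    /* Q = I - Ct * inv(C*Ct) * C */
    MatDuplicate(CtiDC, MAT_COPY_VALUES, &Q);
    MatScale(Q, -1.0);
    MatShift(Q, 1.0);

    /* Qt*S*Q; Q is symmetric, so P^T A P with A = S and P = Q does it */
    MatPtAP(S, Q, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &QtSQ);

    /* Ct*C */
    MatTransposeMatMult(C, C, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &CtC);

    /* Sp = Ct*C + Qt*S*Q */
    MatDuplicate(QtSQ, MAT_COPY_VALUES, &Sp);
    MatAXPY(Sp, 1.0, CtC, DIFFERENT_NONZERO_PATTERN);

Then you can hand Sp to KSPSetOperators() as the preconditioning matrix (or solve with it directly) and let GAMG or a direct solver work on it.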

  Perhaps I am missing something? Note you can also prototype this process in Matlab very quickly to find any glitches. Hopefully you will find that [I - Ct * (C*Ct)^(-1)*C] S [I - Ct * (C*Ct)^(-1)*C] has only a slightly "bulked out" sparsity pattern compared to S.



> 
> In this case with only simple Dirichlet conditions, I think I would like the PCApply to do something like: (I-Q)*jacobi(Ct*C)*(I-Q) + Q*precond(S)*Q. But I am not sure how to do that (I am quite a newbie with PETSc)... With a PCShell and PCShellSetApply?

  I don't think there is any reason to expect that (I-Q)*jacobi(Ct*C)*(I-Q) + Q*precond(S)*Q would be a particularly good preconditioner for Sp, that is, much better than using a preconditioner built from S. But you can use PCCOMPOSITE and PCSHELL with a KSP inside for the two pieces, jacobi(Ct*C) and precond(S).
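  If you do want to experiment with that shell variant anyway, a minimal sketch of the apply callback (untested, error checking omitted, all names in the context struct are illustrative) could look like:

    typedef struct {
      Mat Q;      /* the projector I - Ct*inv(C*Ct)*C   */
      PC  pcCtC;  /* e.g. PCJACOBI set up with Ct*C     */
      PC  pcS;    /* e.g. PCGAMG set up with S          */
      Vec w1, w2; /* work vectors                       */
    } ShellCtx;

    static PetscErrorCode ShellApply(PC pc, Vec x, Vec y)
    {
      ShellCtx *ctx;

      PetscFunctionBeginUser;
      PCShellGetContext(pc, (void **)&ctx);
      /* y = Q * precond(S) * Q * x */
      MatMult(ctx->Q, x, ctx->w1);
      PCApply(ctx->pcS, ctx->w1, ctx->w2);
      MatMult(ctx->Q, ctx->w2, y);
      /* w1 = (I-Q) * jacobi(Ct*C) * (I-Q) * x, using (I-Q)v = v - Q*v */
      MatMult(ctx->Q, x, ctx->w1);
      VecAYPX(ctx->w1, -1.0, x);         /* w1 = x - Q*x   */
      PCApply(ctx->pcCtC, ctx->w1, ctx->w2);
      MatMult(ctx->Q, ctx->w2, ctx->w1);
      VecAYPX(ctx->w1, -1.0, ctx->w2);   /* w1 = w2 - Q*w2 */
      /* add the two contributions */
      VecAXPY(y, 1.0, ctx->w1);
      PetscFunctionReturn(0);
    }

You would register it with PCSetType(pc, PCSHELL), PCShellSetContext(pc, &ctx) and PCShellSetApply(pc, ShellApply), but as said above I would try the explicit Sp route first.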

Barry




> 
> In the end, if we found something that works well with the ainsworth formula, it would be nice to have it natively with PCFIELDSPLIT!
> 
> Many thanks,
> Olivier
> 


