[petsc-users] Ainsworth formula to solve saddle point problems / preconditioner for shell matrices
Barry Smith
bsmith at petsc.dev
Fri Sep 10 11:50:03 CDT 2021
Olivier,
I believe the code is ready for your use. MatInvertVariableBlockEnvelope() takes an MPIAIJ matrix (this would be your C*C' matrix), determines the block diagonal envelope automatically, and constructs a new MPIAIJ matrix that is the inverse of the block diagonal envelope (by inverting each of the blocks). For small blocks, as in your case, the inverse should be inexpensive.
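For intuition, here is what that envelope inverse amounts to, sketched in plain numpy rather than PETSc (the helper name and the block sizes below are made up for illustration; the real routine works on sparse MPIAIJ matrices in parallel):

```python
import numpy as np

def invert_block_envelope(D, block_sizes):
    # Invert a block-diagonal matrix block by block. D is assumed to be
    # block diagonal with the given (variable) block sizes, so the result
    # is its exact inverse and is itself still block diagonal.
    iD = np.zeros_like(D)
    start = 0
    for bs in block_sizes:
        sl = slice(start, start + bs)
        iD[sl, sl] = np.linalg.inv(D[sl, sl])
        start += bs
    return iD

# Example: two 1x1 blocks followed by one 2x2 block.
D = np.diag([2.0, 4.0, 0.0, 0.0])
D[2:4, 2:4] = [[3.0, 1.0], [1.0, 3.0]]
iD = invert_block_envelope(D, [1, 1, 2])
assert np.allclose(D @ iD, np.eye(4))
```

Since each block is tiny, the total cost of the inverse is linear in the matrix size, which is why forming it explicitly is cheap here.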
I have a test code src/mat/tests/ex178.c that demonstrates its use and tests it for some random blockings across multiple processes.
Please let me know if you have any difficulties. The branch has the same name as before, barry/2020-10-08/invert-block-diagonal-aij, but it has been rebased, so you will need to delete your local copy of the branch and recreate it after a git fetch.
Barry
> On Sep 8, 2021, at 3:09 PM, Olivier Jamond <olivier.jamond at cea.fr> wrote:
>
> Ok thanks a lot! I look forward to hearing from you!
>
> Best regards,
> Olivier
>
> On 08/09/2021 20:56, Barry Smith wrote:
>>
>> Olivier,
>>
>> I'll refresh my memory on this and see if I can finish it up.
>>
>> Barry
>>
>>
>>> On Sep 2, 2021, at 12:38 PM, Olivier Jamond <olivier.jamond at cea.fr> wrote:
>>>
>>> Dear Barry,
>>>
>>> First I hope that you and your relatives are doing well in these troubled times...
>>>
>>> Allow me to come back to you about being able to compute something like '(C*Ct)^(-1)*C', where 'C' is an 'MPC' matrix used to impose some boundary conditions for a structural finite element problem:
>>>
>>> [K C^t] [U]   [F]
>>> [C 0  ] [L] = [D]
>>>
>>> as we discussed some time ago, I would like to solve such a problem using the Ainsworth method, which involves this '(C*Ct)^(-1)*C'.
>>>
>>> You kindly started some developments to help me with that, which worked as a 'proof of concept' in sequential, but not yet in parallel, and you also kindly suggested that you could extend it to the parallel case (MR: https://gitlab.com/petsc/petsc/-/merge_requests/3544). Is this still 'scheduled' on your side?
>>>
>>> Sorry again to "harass" you about that...
>>>
>>> Best regards,
>>> Olivier Jamond
>>>
>>> On 03/02/2021 08:45, Olivier Jamond wrote:
>>>> Dear Barry,
>>>>
>>>> I come back to you about this topic. As I wrote last time, this is not a "highly urgent" subject (though we will have to deal with it in the coming months), but it is an important one (especially since the code I am working on has just raised its ambitions significantly). So I just would like to check with you that this is still 'scheduled' on your side.
>>>>
>>>> I am sorry, I feel a little embarrassed about asking you again about your work schedule, but I need some kind of 'visibility' about this topic which will become critical for our application.
>>>>
>>>> Many thanks for helping me on that!
>>>> Olivier
>>>>
>>>> On 02/12/2020 21:34, Barry Smith wrote:
>>>>>
>>>>> Sorry I have not gotten back to you sooner; give me a few days to see how viable it is.
>>>>>
>>>>> Barry
>>>>>
>>>>>
>>>>>> On Nov 25, 2020, at 11:57 AM, Olivier Jamond <olivier.jamond at cea.fr> wrote:
>>>>>>
>>>>>> Hi Barry,
>>>>>>
>>>>>> I come back to you about the feature to unlock the Ainsworth method for saddle point problems in parallel. If I may ask (sorry for that...), is it still on your schedule (I just checked the branch, and it seems 'stuck')?
>>>>>>
>>>>>> This is not "highly urgent" on my side, but the ability to handle saddle point problems efficiently with iterative solvers will be a critical point for the software I am working on...
>>>>>>
>>>>>> Many thanks (and sorry again for asking about your work schedule...)!
>>>>>> Olivier
>>>>>>
>>>>>> On 12/10/2020 16:49, Barry Smith wrote:
>>>>>>>
>>>>>>>
>>>>>>>> On Oct 12, 2020, at 6:10 AM, Olivier Jamond <olivier.jamond at cea.fr> wrote:
>>>>>>>>
>>>>>>>> Hi Barry,
>>>>>>>>
>>>>>>>> Thanks for this work! I tried this branch with my code and sequential matrices on a small case: it does work!
>>>>>>>>
>>>>>>>>
>>>>>>> Excellent. I will extend it to the parallel case and get it into our master release.
>>>>>>>
>>>>>>> We'd be interested in hearing about your convergence and timing experience when you run largish jobs (even sequentially), since this type of problem comes up relatively frequently and we do not currently have great tools for it.
>>>>>>>
>>>>>>> Barry
>>>>>>>
>>>>>>>> Thanks a lot,
>>>>>>>> Olivier
>>>>>>>>
>>>>>>>> On 09/10/2020 03:50, Barry Smith wrote:
>>>>>>>>>
>>>>>>>>> Olivier,
>>>>>>>>>
>>>>>>>>> The branch barry/2020-10-08/invert-block-diagonal-aij contains an example src/mat/tests/ex178.c that shows how to compute inv(CC'). It works for SeqAIJ matrices.
>>>>>>>>>
>>>>>>>>> Please let us know if it works for you and then I will implement the parallel version.
>>>>>>>>>
>>>>>>>>> Barry
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Oct 8, 2020, at 3:59 PM, Barry Smith <bsmith at petsc.dev> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Olivier
>>>>>>>>>>
>>>>>>>>>> I am working on extending the routines now and hopefully push a branch you can try fairly soon.
>>>>>>>>>>
>>>>>>>>>> Barry
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> On Oct 8, 2020, at 3:07 PM, Jed Brown <jed at jedbrown.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Olivier Jamond <olivier.jamond at cea.fr> writes:
>>>>>>>>>>>
>>>>>>>>>>>>> Given the structure of C it seems you should just explicitly construct Sp and use GAMG (or another preconditioner, or even a direct solver) directly on Sp. Trying to avoid explicitly forming Sp will give you a much slower solve, for what benefit? If C were just some generic monster then forming Sp might be unrealistic, but in your case C*Ct is block diagonal with tiny blocks, which means (C*Ct)^(-1) is block diagonal with tiny blocks (the blocks are the inverses of the blocks of C*Ct).
>>>>>>>>>>>>>
>>>>>>>>>>>>> Sp = Ct*C + Qt * S * Q = Ct*C + [I - Ct * (C*Ct)^(-1)*C] S [I - Ct * (C*Ct)^(-1)*C]
>>>>>>>>>>>>>
>>>>>>>>>>>>> [Ct * (C*Ct)^(-1)*C] will again be block diagonal with slightly larger blocks.
>>>>>>>>>>>>>
>>>>>>>>>>>>> You can do D = C*Ct with MatMatMult(), then write custom code that zips through the diagonal blocks of D, inverting all of them to get iD; then use MatPtAP applied to C and iD to get Ct * (C*Ct)^(-1)*C; then MatShift() to include the I; then MatPtAP or MatRARt to get [I - Ct * (C*Ct)^(-1)*C] S [I - Ct * (C*Ct)^(-1)*C]; then finally MatAXPY() to get Sp. The complexity of each of the Mat operations is very low because of the absurdly simple structure of C and its descendants. You might even be able to just use MUMPS to give you the explicit inv(C*Ct) without writing custom code to get iD.
>>>>>>>>>>>>
>>>>>>>>>>>> At this time, I didn't manage to compute iD=inv(C*Ct) without using
>>>>>>>>>>>> dense matrices, which may be a shame because all the matrices are sparse.
>>>>>>>>>>>> Is it possible?
>>>>>>>>>>>>
>>>>>>>>>>>> And I have no idea how to write code to manually zip through the
>>>>>>>>>>>> diagonal blocks of D to invert them...
>>>>>>>>>>>
>>>>>>>>>>> You could use MatInvertVariableBlockDiagonal(), which should perhaps return a Mat instead of a raw array.
>>>>>>>>>>>
>>>>>>>>>>> If you have constant block sizes, MatInvertBlockDiagonalMat will return a Mat.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>
>>
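The projector algebra discussed in the thread can be checked numerically in a few lines of numpy. This is only a sketch with made-up sizes and random dense matrices; in the actual application C*Ct is block diagonal with tiny blocks, so its inverse is cheap and everything stays sparse:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                      # n dofs, m constraints (made-up sizes)

A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)      # stand-in for the stiffness matrix (SPD)
C = rng.standard_normal((m, n))  # constraint matrix, full row rank

iD = np.linalg.inv(C @ C.T)      # (C*Ct)^(-1); block diagonal in the real problem
Q = np.eye(n) - C.T @ iD @ C     # orthogonal projector onto the null space of C
Sp = C.T @ C + Q.T @ S @ Q       # Ainsworth operator: Sp = Ct*C + Qt*S*Q

# Q annihilates the constraints, and Sp is symmetric positive definite,
# so an iterative solver can be applied to Sp instead of the indefinite
# saddle point system.
assert np.allclose(C @ Q, 0)
assert np.allclose(Sp, Sp.T)
assert np.all(np.linalg.eigvalsh(Sp) > 0)
```

The two terms of Sp act on complementary subspaces: Ct*C is positive on the range of Ct, while Qt*S*Q is positive on the null space of C, which is why the sum is positive definite.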