On Oct 12, 2020, at 6:10 AM, Olivier Jamond <olivier.jamond@cea.fr> wrote:

> Hi Barry,
>
> Thanks for this work! I tried this branch with my code and sequential matrices on a small case: it does work!

Excellent. I will extend it to the parallel case and get it into our master release.

We'd be interested in hearing about your convergence and timing experiences when you run largish jobs (even sequentially): this type of problem comes up relatively frequently, we need a variety of solvers that can handle it, and we currently do not have great tools for it.

Barry

> Thanks a lot,
> Olivier
<div class="moz-cite-prefix">On 09/10/2020 03:50, Barry Smith wrote:<br class="">
</div>
<blockquote type="cite" cite="mid:218E7696-2A50-42A3-8CF2-D58FCC17B855@petsc.dev" class="">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252" class="">
<div class=""><br class="">
</div>
>> Olivier,
>>
>> The branch barry/2020-10-08/invert-block-diagonal-aij contains an example src/mat/tests/ex178.c that shows how to compute inv(CC'). It works for SeqAIJ matrices.
>>
>> Please let us know if it works for you and then I will implement the parallel version.
>>
>> Barry
<div class=""><br class="">
<div class=""><br class="">
<blockquote type="cite" class="">
<div class="">On Oct 8, 2020, at 3:59 PM, Barry Smith <<a href="mailto:bsmith@petsc.dev" class="" moz-do-not-send="true">bsmith@petsc.dev</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div class=""><br class="">
Olivier<br class="">
<br class="">
I am working on extending the routines now and
hopefully push a branch you can try fairly soon.<br class="">
<br class="">
Barry<br class="">
<br class="">
<br class="">
<blockquote type="cite" class="">On Oct 8, 2020, at 3:07
PM, Jed Brown <<a href="mailto:jed@jedbrown.org" class="" moz-do-not-send="true">jed@jedbrown.org</a>>
wrote:<br class="">
<br class="">
Olivier Jamond <<a href="mailto:olivier.jamond@cea.fr" class="" moz-do-not-send="true">olivier.jamond@cea.fr</a>>
writes:<br class="">
<br class="">
<blockquote type="cite" class="">
<blockquote type="cite" class=""> Given the
structure of C it seems you should just explicitly
construct Sp and use GAMG (or other
preconditioners, even a direct solver) directly on
Sp. Trying to avoid explicitly forming Sp will
give you a much slower performing solving for what
benefit? If C was just some generic monster than
forming Sp might be unrealistic but in your case
CCt is is block diagonal with tiny blocks which
means (C*Ct)^(-1) is block diagonal with tiny
blocks (the blocks are the inverses of the blocks
of (C*Ct)).<br class="">
<br class="">
Sp = Ct*C + Qt * S * Q = Ct*C + [I - Ct *
(C*Ct)^(-1)*C] S [I - Ct * (C*Ct)^(-1)*C]<br class="">
<br class="">
[Ct * (C*Ct)^(-1)*C] will again be block diagonal
with slightly larger blocks.<br class="">
<br class="">
You can do D = (C*Ct) with MatMatMult() then write
custom code that zips through the diagonal blocks
of D inverting all of them to get iD then use
MatPtAP applied to C and iD to get Ct *
(C*Ct)^(-1)*C then MatShift() to include the I
then MatPtAP or MatRAR to get [I - Ct *
(C*Ct)^(-1)*C] S [I - Ct * (C*Ct)^(-1)*C] then
finally MatAXPY() to get Sp. The complexity of
each of the Mat operations is very low because of
the absurdly simple structure of C and its
descendants. You might even be able to just use
MUMPS to give you the explicit inv(C*Ct) without
writing custom code to get iD.<br class="">
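
A sketch of that pipeline in PETSc calls, under a few assumptions: C (m x n) and S (n x n) are assembled sequential AIJ matrices, C*Ct is block diagonal with a constant block size bs, MatMatTransposeMult() is used to form C*Ct (rather than forming Ct and calling MatMatMult()), and MatInvertBlockDiagonalMat() stands in for the custom block-inversion code. The helper name BuildSp is made up for illustration; this is one reading of the recipe, not code from the thread:

    #include <petscmat.h>

    /* Sketch: Sp = Ct*C + Qt*S*Q with Q = I - Ct*inv(C*Ct)*C.
       Assumes C (m x n) and S (n x n) are assembled SeqAIJ matrices and that
       C*Ct is block diagonal with a constant block size bs.                  */
    PetscErrorCode BuildSp(Mat C, Mat S, PetscInt bs, Mat *Sp)
    {
      Mat            D, iD, Q, QtSQ;
      PetscErrorCode ierr;

      ierr = MatMatTransposeMult(C, C, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &D);CHKERRQ(ierr);  /* D = C*Ct */
      ierr = MatSetBlockSize(D, bs);CHKERRQ(ierr);             /* assumed block structure of C*Ct */
      ierr = MatCreate(PetscObjectComm((PetscObject)C), &iD);CHKERRQ(ierr);
      ierr = MatInvertBlockDiagonalMat(D, iD);CHKERRQ(ierr);   /* iD = inv(C*Ct) */
      ierr = MatPtAP(iD, C, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &Q);CHKERRQ(ierr);             /* Q = Ct*iD*C, so far */
      ierr = MatScale(Q, -1.0);CHKERRQ(ierr);
      ierr = MatShift(Q, 1.0);CHKERRQ(ierr);                   /* Q = I - Ct*iD*C */
      ierr = MatPtAP(S, Q, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &QtSQ);CHKERRQ(ierr);           /* Qt*S*Q */
      ierr = MatTransposeMatMult(C, C, MAT_INITIAL_MATRIX, PETSC_DEFAULT, Sp);CHKERRQ(ierr);  /* Sp = Ct*C */
      ierr = MatAXPY(*Sp, 1.0, QtSQ, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);                /* Sp += Qt*S*Q */
      ierr = MatDestroy(&D);CHKERRQ(ierr);
      ierr = MatDestroy(&iD);CHKERRQ(ierr);
      ierr = MatDestroy(&Q);CHKERRQ(ierr);
      ierr = MatDestroy(&QtSQ);CHKERRQ(ierr);
      return 0;
    }
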
>>>>>
>>>>> At this time, I didn't manage to compute iD = inv(C*Ct) without using dense matrices, which may be a shame because all the matrices are sparse. Is it possible?
>>>>>
>>>>> And I have no idea how to write code that manually zips through the diagonal blocks of D to invert them...
<br class="">
You could use MatInvertVariableBlockDiagonal(), which
should perhaps return a Mat instead of a raw array.<br class="">
<br class="">
If you have constant block sizes,
MatInvertBlockDiagonalMat will return a Mat.<br class="">
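
As an illustration of the variable-block-size routine Jed mentions, a minimal sketch, assuming the block sizes bsizes[0..nblocks-1] of D = C*Ct are known to the caller; the helper name InvertDiagonalBlocks is made up:

    #include <petscmat.h>

    /* Sketch: invert the variable-size diagonal blocks of D = C*Ct.
       nblocks and bsizes[] describe the block structure and are assumed known. */
    PetscErrorCode InvertDiagonalBlocks(Mat D, PetscInt nblocks, const PetscInt bsizes[])
    {
      PetscScalar   *idiag;
      PetscInt       i, len = 0;
      PetscErrorCode ierr;

      for (i = 0; i < nblocks; i++) len += bsizes[i]*bsizes[i];
      ierr = PetscMalloc1(len, &idiag);CHKERRQ(ierr);
      /* Fills idiag with the inverses of the diagonal blocks of D, stored one
         block after another in column-major order (a raw array, not a Mat). */
      ierr = MatInvertVariableBlockDiagonal(D, nblocks, bsizes, idiag);CHKERRQ(ierr);
      /* ... e.g. assemble a sparse Mat from idiag with MatSetValues() ... */
      ierr = PetscFree(idiag);CHKERRQ(ierr);
      return 0;
    }

With a constant block size, creating an empty Mat with MatCreate() and calling MatInvertBlockDiagonalMat(D, iD) gives the inverse directly as a Mat, as in the earlier sketch above.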