<div dir="ltr"><div dir="ltr">On Thu, Jan 23, 2025 at 4:19 AM Frank Bramkamp <<a href="mailto:bramkamp@nsc.liu.se">bramkamp@nsc.liu.se</a>> wrote:</div><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div>Thanks for the quick response,</div><div><br></div><div>so it seems that there is currently not a straight forward way to add fortran solvers to petsc.</div><div><br></div><div><br></div>I will also have a look how I to add a new solver in the petsc source code.<div><br></div><div>I wanted to test some ideas about a modified approach for gram schmidt orthogonalisation.</div><div>It is called “windowed” orthogonalisation. The basic idea is as follows:</div><div>One first starts gram schmidt as usual, until we have e.g. 20 krylov vectors.</div><div><br></div><div>For the next iterations, where it gets more expensive, one does not orthogonalize the new vector</div><div>against all previous vectors but only the last 20 ones. As we start with the first 20 ones</div><div>as usual one has a good basis that mainly covers the lower frequencies.</div><div>(is that also your experience that the first krylov vectors are more in the low frequency range ?!</div><div>I still have to check this myself)</div><div><br></div><div>As for higher iterations we only consider the last 20 vectors for gam schmidt, it makes it cheaper</div><div>(that is the “window”). </div></div></blockquote><div><br></div><div>At least the windowing part is in Saad's book. The restart is not, as far as I remember. When I tried it,</div><div>it failed often enough that I did not keep using it. Putting effort into the PC was more effective for me.</div><div>Certainly a smaller number of vectors is cheaper. We usually use Classical with selective reorthog, so</div><div>we are not paying a sync penalty in parallel that scales with the number of vectors.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div>As we do not want to have too large orthogonalization errors, after another 20 iterations</div><div>one makes a restart using deflation where we extract the main eigenvectors from the existing solution so far,</div><div>so we have a new full orthogonal basis after lets say 40 iterations. </div><div><br></div><div>For that one can probably use the approach from DGMRES. So one could simply modify DMRES, </div><div>respectively the gram-schmidt algorithm to allow for a more flexible way how many vectors to consider.</div><div>I think one basically just has to modify the start index in gram schmidt to allow not to use all vectors</div><div>but just the last lets say 20 vectors for orthogonalization.</div><div><br></div><div>There is a paper and matlab implementation where they discuss this approach (I have to look up where I found it).</div><div><br></div><div>Does that approach sound to have some potential to you to make gram schmidt cheaper ?</div><div><br></div><div><br></div><div>The other thing that I wanted to check is the least squares givens rotation. 
>
> The other thing that I wanted to check is the least-squares Givens rotation. It seems that
> in the Ginkgo linear solver they reuse certain components within the Givens rotation, which
> could potentially make it a bit faster. I have to look into the details again.
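>
> For reference, the textbook version of that step is roughly the following (a sketch in real
> arithmetic with made-up names, not the actual PETSc or Ginkgo code):
>
>   #include <petscsys.h>
>
>   /* Rotate the new Hessenberg column h[0..it+1] with the previous Givens rotations,
>      create one new rotation (cs[it], sn[it]), and update the least-squares rhs g. */
>   static void ApplyGivensStep(PetscInt it, PetscReal *h, PetscReal *cs, PetscReal *sn,
>                               PetscReal *g, PetscReal *resnorm)
>   {
>     for (PetscInt j = 0; j < it; j++) {              /* reuse the previous rotations     */
>       PetscReal t = cs[j] * h[j] + sn[j] * h[j + 1];
>       h[j + 1]    = -sn[j] * h[j] + cs[j] * h[j + 1];
>       h[j]        = t;
>     }
>     PetscReal d = PetscSqrtReal(h[it] * h[it] + h[it + 1] * h[it + 1]);
>     cs[it]    = h[it] / d;                           /* new rotation zeroing h[it+1]     */
>     sn[it]    = h[it + 1] / d;
>     h[it]     = d;                                   /* = cs*h[it] + sn*h[it+1]          */
>     h[it + 1] = 0.0;
>     g[it + 1] = -sn[it] * g[it];                     /* rotate the right-hand side       */
>     g[it]     = cs[it] * g[it];
>     *resnorm  = PetscAbsReal(g[it + 1]);             /* residual norm comes out for free */
>   }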
>
> To work out how they do it in Ginkgo, I let claude.ai crawl the PETSc code and the Ginkgo
> code. The AI can then explain the differences to me and show me in detail where the codes
> differ, so that I can take over the best parts from the different codes, and the AI can
> often explain quite well what the code does (at least that is the plan).
>
> Greetings, Frank
>
> On 23 Jan 2025, at 04:58, Barry Smith <bsmith@petsc.dev> wrote:
>
>> I think it is best to code the modified GMRES method in C; likely, much of the current
>> GMRES code could be reused. We'd be happy to help incorporate it into PETSc.
>>
>>   Barry
>>
>> On Jan 22, 2025, at 5:11 PM, Matthew Knepley <knepley@gmail.com> wrote:
>>
>>> On Wed, Jan 22, 2025 at 3:18 PM Frank Bramkamp <bramkamp@nsc.liu.se> wrote:
>>>
>>>> Dear PETSc team,
>>>>
>>>> I was planning to program a custom KSP method, some modified GMRES.
>>>> We mainly use PETSc from Fortran. Therefore I wonder whether it is possible to have
>>>> an interface to a custom KSP solver that is written in Fortran.
>>>>
>>>> I thought of using KSPRegister to register my own routine, but that seems to be
>>>> available only in C. Or is it possible to have a Fortran/C wrapper to do that?
>>>
>>> We have wrappers for other functions that take callbacks, such as SNESSetFunction().
>>> What we need to do is have a list of Fortran function pointers for this method. Then,
>>> when you register, we actually stick in a C wrapper that calls your Fortran function
>>> pointer that we have stored in our list. It should be straightforward looking at the
>>> implementation for something like SNESSetFunction(). We would help if you want to try :)
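>>>
>>> Roughly, such a wrapper would have the following shape (only a sketch with made-up names,
>>> not the actual implementation; KSPRegister() and the ksp->ops->solve slot are the real
>>> PETSc mechanism, everything else here is hypothetical):
>>>
>>>   #include <petsc/private/kspimpl.h>
>>>
>>>   /* hypothetical signature of the user's Fortran solver routine */
>>>   typedef void (*FortranKSPSolveFn)(KSP *ksp, PetscErrorCode *ierr);
>>>
>>>   /* Fortran function pointer stored at registration time (one slot shown for brevity) */
>>>   static FortranKSPSolveFn fortran_solve_fn;
>>>
>>>   /* C wrapper that PETSc calls; it forwards to the stored Fortran routine */
>>>   static PetscErrorCode KSPSolve_FortranShell(KSP ksp)
>>>   {
>>>     PetscErrorCode fierr = PETSC_SUCCESS;
>>>
>>>     PetscFunctionBegin;
>>>     fortran_solve_fn(&ksp, &fierr); /* call back into Fortran */
>>>     PetscCall(fierr);
>>>     PetscFunctionReturn(PETSC_SUCCESS);
>>>   }
>>>
>>>   static PetscErrorCode KSPCreate_FortranShell(KSP ksp)
>>>   {
>>>     PetscFunctionBegin;
>>>     ksp->ops->solve = KSPSolve_FortranShell; /* install the wrapper as the solve routine */
>>>     PetscFunctionReturn(PETSC_SUCCESS);
>>>   }
>>>
>>> A Fortran-callable registration routine would then store your function pointer in
>>> fortran_solve_fn and call KSPRegister("yourgmres", KSPCreate_FortranShell).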
>>>
>>> Barry, is this impacted by your binding rewrite?
>>>
>>>   Thanks,
>>>
>>>      Matt
>>>
>>>> Thanks, Frank Bramkamp
>>>
>>> --
>>> What most experimenters take for granted before they begin their experiments is infinitely
>>> more interesting than any results to which their experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
--
What most experimenters take for granted before they begin their experiments is infinitely more
interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/