<div dir="ltr">Iterative solvers have to be designed for your particular operator.<div>You want to look in your field to see how people solve these problems. (eg, zeros on the diagonal will need something like a block solver or maybe ILU with a particular ordering)</div><div>I don't personally know anything about this operator. Perhaps someone else can help you, but you will probably need to find this yourself.</div><div>Also, hypre's ILUTP is not well supported. You could use our (serial) ILU on one processor to experiment with (<a href="https://petsc.org/main/manualpages/PC/PCILU">https://petsc.org/main/manualpages/PC/PCILU</a>).</div><div><br></div><div>Mark</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jan 31, 2024 at 6:51 AM Niclas Götting <<a href="mailto:ngoetting@itp.uni-bremen.de">ngoetting@itp.uni-bremen.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi all,<br>
I've been trying for the last couple of days to solve a linear system
using iterative methods. The system size itself scales exponentially
(64^N) with the number of components, so I get sizes of

* (64, 64) for one component
* (4096, 4096) for two components
* (262144, 262144) for three components

I can solve the first two cases with direct solvers and don't run into
any problems; however, the last case is the first nontrivial one, and
it's too large for a direct solution, which is why I believe that I
need an iterative solver.

As I know the solution for the first two cases, I tried to reproduce
them using GMRES and failed on the second one: GMRES didn't converge
and seemed to be heading in the wrong direction (the vector to which
it "tries" to converge is completely different from the correct
solution). I went as far as -ksp_max_it 1000000, which takes orders of
magnitude longer than the LU solution, and I'd intuitively think that
GMRES should not take *that* much longer than LU. Here is the
information I have about this (4096, 4096) system:

* not symmetric (which is why I went for GMRES)
* not singular (SVD: condition number 1.427743623238e+06, 0 of 4096
  singular values are (nearly) zero)
* solving without preconditioning does not converge (DIVERGED_ITS)
* solving with ILU and natural ordering fails due to zeros on the diagonal
* solving with ILU and RCM ordering does not converge (DIVERGED_ITS)
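
In PETSc option form, these attempts correspond roughly to the
following combinations (a sketch; I may have passed a few additional
monitoring flags):

* no preconditioning:    -ksp_type gmres -pc_type none
* ILU, natural ordering: -ksp_type gmres -pc_type ilu
* ILU, RCM ordering:     -ksp_type gmres -pc_type ilu -pc_factor_mat_ordering_type rcm
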
After some searching I also found
[this](http://arxiv.org/abs/1504.06768) paper, which mentions the use
of ILUTP. I believe this is accessed in PETSc via hypre, which,
however, threw a SEGV for me, and I'm not sure whether it's worth
debugging at this point in time, because I might be missing something
entirely different.

Does anybody have an idea how this system could be solved in a
reasonable time, such that the method also scales to the
three-component problem?

Thank you all very much in advance!

Best regards
Niclas
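
P.S. Concretely, by "experiment with our (serial) ILU" I mean running
on a single process with something along the lines of the options
below (on the command line or in an options file); the fill level and
ordering here are only starting points to play with, not a
recommendation:

  -ksp_type gmres -ksp_gmres_restart 200
  -pc_type ilu -pc_factor_levels 2
  -pc_factor_mat_ordering_type rcm
  -pc_factor_nonzeros_along_diagonal
  -ksp_monitor_true_residual -ksp_converged_reason

-pc_factor_nonzeros_along_diagonal reorders to avoid the zero pivots
you hit with the natural ordering, and increasing -pc_factor_levels
makes the incomplete factorization closer to a full LU at the cost of
memory.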