<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Hi Hong,</p>
<p>A_m would typically have a leading dimension between 6e5 and
1.5e6, with roughly 100 non-zero entries per row on average.</p>
<p>Don't get me wrong: performing ONE LU factorization is fine
memory-wise. It's just that I need to keep track of, and store, M LU
factorizations, which obviously do not fit in RAM.</p>
<p>I thought that with MUMPS, each process was able to store its part
of the LU factors? Anyway, I do not mind the lack of scalability; I
know that saving to and reading from disk is slow. As long as I solve
the systems in parallel, it's fine.<br>
</p>
<blockquote type="cite">
<div>If A_m is not huge, you can create one KSP with matrix A =
diag(A_m).</div>
<div>Keep the LU factor of A = diag(LU factor of A_m), then solve A x
= b repeatedly with changing b.</div>
<div>You can use '-pc_type bjacobi -pc_bjacobi_blocks M -sub_pc_type
lu'.</div>
<div>Hong</div>
<div><br>
</div>
</blockquote>
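<p>For concreteness, here is a minimal sketch of that block-diagonal
setup in C against the PETSc API. The sizes (4 blocks of 100 rows each)
and the tridiagonal stand-in blocks are made up for illustration; the
real A_m come from the application, and error checking is omitted for
brevity:</p>
<pre>
/* Minimal sketch of the block-diagonal KSP suggested above.  Everything
 * below is illustrative: 4 blocks of 100 rows stand in for the real A_m,
 * each block is a toy tridiagonal matrix, and error checking is omitted.
 * Run with: -pc_type bjacobi -pc_bjacobi_blocks 4 -sub_pc_type lu        */
#include &lt;petscksp.h&gt;

#define NBLK 4                       /* number of systems M (made up)     */
#define NBRW 100                     /* rows per block (made up)          */

int main(int argc, char **argv)
{
  Mat      A;                        /* A = diag(A_1, ..., A_M)           */
  Vec      x, b;
  KSP      ksp;
  PetscInt i, rstart, rend, N = NBLK * NBRW;

  PetscInitialize(&amp;argc, &amp;argv, NULL, NULL);

  MatCreate(PETSC_COMM_WORLD, &amp;A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &amp;rstart, &amp;rend);
  for (i = rstart; i &lt; rend; i++) {             /* toy tridiagonal blocks, */
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);     /* no coupling across them */
    if (i % NBRW != 0)       MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
    if ((i + 1) % NBRW != 0) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatCreateVecs(A, &amp;x, &amp;b);
  VecSet(b, 1.0);

  KSPCreate(PETSC_COMM_WORLD, &amp;ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetFromOptions(ksp);            /* picks up -pc_type bjacobi etc.    */

  KSPSolve(ksp, b, x);               /* first solve: one LU per block     */
  VecScale(b, 2.0);                  /* pretend the RHS changed ...       */
  KSPSolve(ksp, b, x);               /* ... later solves reuse the LU     */

  KSPDestroy(&amp;ksp);
  VecDestroy(&amp;x);
  VecDestroy(&amp;b);
  MatDestroy(&amp;A);
  PetscFinalize();
  return 0;
}
</pre>
<p>Run with the options quoted above (optionally adding
'-sub_pc_factor_mat_solver_type mumps' so each block LU is done by
MUMPS): every diagonal block is factored once at the first KSPSolve, and
later solves with new right-hand sides reuse those factors.</p>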
<p>This approach is likely to fail, as my matrices are ill-conditioned.<br>
</p>
<p>Thibaut</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 23/07/2019 17:02, Zhang, Hong wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAGCphBt7woAnwG3zJqa84c3AunuAaRjYgpGJFVphG=448DfU5A@mail.gmail.com">
<div dir="ltr">
<div dir="ltr">Thibaut :<br>
</div>
<div>How large is your A_m? Writing to and reading from disk is not
scalable. It would be a last resort when the matrix factors are too
large for memory. The PETSc/MUMPS interface does not support this
feature.</div>
<div><br>
</div>
<div>If A_m is not huge, you can create one KSP with matrix A =
diag(A_m).</div>
<div>Keep the LU factor of A = diag(LU factor of A_m), then solve A x
= b repeatedly with changing b.</div>
<div>You can use '-pc_type bjacobi -pc_bjacobi_blocks M -sub_pc_type
lu'.</div>
<div>Hong</div>
<div><br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
Dear PETSc users,<br>
<br>
I need to solve several linear systems successively, with LU
factorization, as part of an iterative process in my Fortran
application code.<br>
<br>
The process would solve M systems (A_m)(x_m,i) = (b_m,i) for m=1,M at
each iteration i, while computing the LU factorization of each A_m
only once. The RHSs (b_m,i+1) are computed from all the different
(x_m,i) and all depend upon each other.<br>
<br>
The way I envisage doing this is to use MUMPS to compute, successively,
each of the LU factorizations (m) in parallel and store the factors on
disk, creating/assembling/destroying the matrices A_m on the fly.
Then, whenever needed, read the factors back in parallel to solve the
systems. Since version 5.2, MUMPS has a save/restore feature that
allows this; see <a
href="http://mumps.enseeiht.fr/doc/userguide_5.2.1.pdf"
rel="noreferrer" target="_blank">
http://mumps.enseeiht.fr/doc/userguide_5.2.1.pdf</a>, p. 20,
24 and 58.<br>
<br>
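As far as I can tell from the 5.2.1 guide, that save/restore cycle
boils down to two extra job phases. Below is a rough sketch with the
raw MUMPS C interface, using a toy 2x2 matrix in place of A_m; the JOB
values and the SAVE_DIR / MUMPS_SAVE_DIR settings are assumptions to be
checked against the guide pages cited above:<br>
<pre>
/* Hedged sketch of the MUMPS save/restore cycle (double precision, C
 * interface).  To verify against the 5.2.1 guide: JOB=7 saves the
 * factors to disk, JOB=8 restores them, and the target directory comes
 * from the SAVE_DIR/SAVE_PREFIX entries of the structure or the
 * MUMPS_SAVE_DIR environment variable (not set in this sketch).
 * Toy 2x2 system, single MPI process, no error checking. */
#include &lt;stdio.h&gt;
#include &lt;mpi.h&gt;
#include &lt;dmumps_c.h&gt;
#define USE_COMM_WORLD -987654

int main(int argc, char **argv)
{
  DMUMPS_STRUC_C id;
  MUMPS_INT irn[] = {1, 2, 2}, jcn[] = {1, 2, 1};
  double    a[]   = {2.0, 3.0, 1.0}, rhs[] = {1.0, 1.0};

  MPI_Init(&amp;argc, &amp;argv);

  /* factorize once, push the factors to disk, free the in-core data */
  id.comm_fortran = USE_COMM_WORLD;
  id.par = 1; id.sym = 0;
  id.job = -1; dmumps_c(&amp;id);              /* initialize instance          */
  id.n = 2; id.nz = 3; id.irn = irn; id.jcn = jcn; id.a = a;
  id.job = 4;  dmumps_c(&amp;id);              /* analysis + LU factorization  */
  id.job = 7;  dmumps_c(&amp;id);              /* save instance to disk        */
  id.job = -2; dmumps_c(&amp;id);              /* release the in-core instance */

  /* later, whenever a solve is needed: restore the factors and solve */
  id.comm_fortran = USE_COMM_WORLD;
  id.par = 1; id.sym = 0;
  id.job = -1; dmumps_c(&amp;id);              /* fresh instance               */
  id.job = 8;  dmumps_c(&amp;id);              /* restore the saved factors    */
  id.rhs = rhs;
  id.job = 3;  dmumps_c(&amp;id);              /* solve; x overwrites rhs      */
  printf("x = (%g, %g)\n", rhs[0], rhs[1]);
  id.job = -2; dmumps_c(&amp;id);

  MPI_Finalize();
  return 0;
}
</pre>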
In its current state, the PETSc/MUMPS interface does not incorporate
that feature. I'm an advanced Fortran programmer but not in C, so I
don't think I would do an amazing job having a go inside
src/mat/impls/aij/mpi/mumps/mumps.c.<br>
<br>
I was picturing something like creating as many KSP objects as there
are linear systems to be solved, with some sort of flag to force the
storage of the LU factors on disk after the first call to KSPSolve,
then keep calling KSPSolve as many times as needed.<br>
<br>
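To make that picture concrete, here is a sketch of the part that
already works today, using only existing PETSc calls: one KSP per A_m,
LU through MUMPS, factored at the first KSPSolve and reused for every
later right-hand side. The matrices, sizes and the coupling between
the b_m are made-up stand-ins; the save-factors-to-disk flag is
precisely the piece that is missing:<br>
<pre>
/* Sketch only: NSYS toy tridiagonal systems stand in for the A_m; PETSc
 * is assumed to be configured with MUMPS; error checking omitted.      */
#include &lt;petscksp.h&gt;

#define NSYS  4                      /* number of systems M (made up) */
#define NROWS 50                     /* size of each toy system       */

int main(int argc, char **argv)
{
  Mat      A[NSYS];
  Vec      x[NSYS], b[NSYS];
  KSP      ksp[NSYS];
  PetscInt m, i, it, rstart, rend;

  PetscInitialize(&amp;argc, &amp;argv, NULL, NULL);

  for (m = 0; m &lt; NSYS; m++) {
    PC pc;
    MatCreate(PETSC_COMM_WORLD, &amp;A[m]);
    MatSetSizes(A[m], PETSC_DECIDE, PETSC_DECIDE, NROWS, NROWS);
    MatSetFromOptions(A[m]);
    MatSetUp(A[m]);
    MatGetOwnershipRange(A[m], &amp;rstart, &amp;rend);
    for (i = rstart; i &lt; rend; i++) {              /* toy tridiagonal A_m */
      MatSetValue(A[m], i, i, 2.0 + m, INSERT_VALUES);
      if (i > 0)         MatSetValue(A[m], i, i - 1, -1.0, INSERT_VALUES);
      if (i &lt; NROWS - 1) MatSetValue(A[m], i, i + 1, -1.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A[m], MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A[m], MAT_FINAL_ASSEMBLY);
    MatCreateVecs(A[m], &amp;x[m], &amp;b[m]);
    VecSet(b[m], 1.0);

    KSPCreate(PETSC_COMM_WORLD, &amp;ksp[m]);
    KSPSetOperators(ksp[m], A[m], A[m]);
    KSPSetType(ksp[m], KSPPREONLY);                /* direct solve only   */
    KSPGetPC(ksp[m], &amp;pc);
    PCSetType(pc, PCLU);
    PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);
    KSPSetFromOptions(ksp[m]);
  }

  for (it = 0; it &lt; 3; it++) {                   /* the outer iteration i */
    for (m = 0; m &lt; NSYS; m++) {
      KSPSolve(ksp[m], b[m], x[m]);      /* A_m is factored once, at it = 0 */
      /* toy stand-in for "b_(m,i+1) computed from the x_(m,i)" */
      VecCopy(x[m], b[(m + 1) % NSYS]);
    }
  }

  for (m = 0; m &lt; NSYS; m++) {
    KSPDestroy(&amp;ksp[m]); MatDestroy(&amp;A[m]);
    VecDestroy(&amp;x[m]);   VecDestroy(&amp;b[m]);
  }
  PetscFinalize();
  return 0;
}
</pre>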
Would you support such a feature?<br>
<br>
Thanks for your support,<br>
<br>
Thibaut<br>
</blockquote>
</div>
</div>
</blockquote>
</body>
</html>