<div dir="ltr">Evan,<div>I started looking at the petsc/<span style="font-size:12.8px">STRUMPACK-sparse interface. </span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Currently, the interface supports two preconditioners: lu and ilu. </span></div><div><span style="font-size:12.8px">First, configure petsc with '--download-strumpack --with-cxx-dialect=C++11' (some external packages are also required).</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">After building petsc, I tested petsc/src/ksp/ksp/examples/tutorials/ex2.c:</span></div><div><div><span style="font-size:12.8px">mpiexec -n 4 ./ex2 -pc_type ilu -pc_factor_mat_solver_package strumpack -mat_strumpack_hssminsize 2 -ksp_monitor</span></div><div><span style="font-size:12.8px"># WARNING STRUMPACK: There were unrecognized options.</span></div><div><span style="font-size:12.8px"> 0 KSP Residual norm 7.459478705273e+00</span></div><div><span style="font-size:12.8px"> 1 KSP Residual norm 1.525843261155e-02</span></div><div><span style="font-size:12.8px"> 2 KSP Residual norm 6.672464933691e-05</span></div><div><span style="font-size:12.8px">Norm of error 6.61922e-05 iterations 2</span></div></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Using the option '-help', I see strumpack supports two preconditioners, lu and ilu, with the following options:</span></div><div><div><span style="font-size:12.8px">$ mpiexec -n 4 ./ex2 -pc_type ilu -pc_factor_mat_solver_package strumpack -mat_strumpack_hssminsize 2 -ksp_monitor -h |grep strumpack</span></div><div><span style="font-size:12.8px"> -mat_strumpack_verbose: <FALSE> Print STRUMPACK information (None)</span></div><div><span style="font-size:12.8px"> -mat_strumpack_rctol <0.01>: Relative compression tolerance (None)</span></div><div><span 
style="font-size:12.8px"> -mat_strumpack_colperm: <TRUE> Find a col perm to get nonzero diagonal (None)</span></div><div><span style="font-size:12.8px"> -mat_strumpack_hssminsize <2500>: Minimum size of dense block for HSS compression </span></div></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">My guess is that HSS is the 'low rank approximation' you mentioned, and '-mat_strumpack_hssminsize' is the block size (excuse my ignorance of HSS).</span></div><div><span style="font-size:12.8px">If this is true, the feature you requested is already in our interface. Otherwise, please send me more detailed info about your request. The STRUMPACK interface was contributed by a user, so I need to learn it in order to support it. </span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Hong</span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 10, 2017 at 2:23 PM, Evan Um <span dir="ltr"><<a href="mailto:evanum@gmail.com" target="_blank">evanum@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Hong,<div><br></div><div>Thanks for your help. I am testing whether I can use MUMPS block low rank factors as a preconditioner for QMR within the MUMPS framework. I would like to ask one more question. STRUMPACK (<a href="http://portal.nersc.gov/project/sparse/strumpack/" target="_blank">http://portal.nersc.gov/<wbr>project/sparse/strumpack/</a>) also supports low rank approximation. 
Can PETSc also allow users to use the approximate factors as a preconditioner with PETSc's QMR?</div><div><div><br></div><div>Best,</div><div>Evan</div><div><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Tue, Oct 3, 2017 at 8:34 AM, Hong <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>></span> wrote:<br></span><div><div class="h5"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Evan,<div>ICNTL(35) and CNTL(7) are added to the petsc-mumps interface in branch </div><div>hzhang/update-mumps-5.1.1-cntl</div><div><br></div><div>You may give it a try. Once it passes our regression tests, I'll merge it into the petsc master branch.</div><span class="m_-8729706842368820723HOEnZb"><font color="#888888"><div><br></div></font></span><div><span class="m_-8729706842368820723HOEnZb"><font color="#888888">Hong</font></span><div><div class="m_-8729706842368820723h5"><br><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Sep 24, 2017 at 8:08 PM, Hong <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I'll check it.<span class="m_-8729706842368820723m_7549400270970232919gmail-HOEnZb"><font color="#888888"><div>Hong</div></font></span></div><div class="m_-8729706842368820723m_7549400270970232919gmail-HOEnZb"><div class="m_-8729706842368820723m_7549400270970232919gmail-h5"><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Sep 24, 2017 at 3:42 PM, Evan Um <span dir="ltr"><<a href="mailto:evanum@gmail.com" target="_blank">evanum@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Barry,<div><br></div><div>Thanks 
for your comments. To activate block low rank (BLR) approximation in MUMPS version 5.1.1, a user needs to turn on the functionality (i.e. ICNTL(35)=1) and specify the tolerance value (e.g. CNTL(7)=1e-4). In PETSc, I think we can set ICNTL and CNTL parameters for MUMPS. I was wondering if we can still use the BLR approximation as a preconditioner for Krylov solvers.</div><div><br></div><div>Best,</div><div>Evan</div><div> </div></div><div class="m_-8729706842368820723m_7549400270970232919gmail-m_8252441951155370891HOEnZb"><div class="m_-8729706842368820723m_7549400270970232919gmail-m_8252441951155370891h5"><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Sep 23, 2017 at 6:45 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span><br>
> On Sep 23, 2017, at 8:38 PM, Evan Um <<a href="mailto:evanum@gmail.com" target="_blank">evanum@gmail.com</a>> wrote:<br>
><br>
> Dear PETSC Users,<br>
><br>
> My system matrix comes from finite element modeling and is complex and unstructured. Its typical size is a few million by a few million. I am wondering if I can use the MUMPS parallel direct solver as a preconditioner in PETSc. For example, I want to pass factored matrices to Krylov iterative solvers such as QMR. Is there any PETSc+MUMPS example code for this purpose?<br>
<br>
</span> You don't pass factored matrices; you just pass the original matrix and use -pc_type lu -pc_factor_mat_solver_package mumps<br>
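A minimal sketch of such a run, assuming PETSc was configured with --download-mumps --download-scalapack and reusing the ex2 KSP tutorial from the PETSc source tree:<br>

```shell
# Complete MUMPS LU factorization used as the preconditioner for GMRES;
# PETSc builds the factors internally from the original matrix.
mpiexec -n 4 ./ex2 -ksp_type gmres \
  -pc_type lu -pc_factor_mat_solver_package mumps \
  -ksp_monitor
```

With an exact LU preconditioner the Krylov method typically converges in one or two iterations, since the preconditioner is (up to roundoff) the exact inverse.<br>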
<span><br>
> Can PETSc call the latest MUMPS that supports block low rank approximation?<br>
<br>
</span> No, send us info on it and we'll see if we can add an interface<br>
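For context, PETSc exposes MUMPS control parameters as runtime options of the form -mat_mumps_icntl_<id> and -mat_mumps_cntl_<id>, so if an interface to ICNTL(35) and CNTL(7) were added, enabling BLR-compressed factors as a preconditioner might look like the following hypothetical sketch:<br>

```shell
# Hypothetical: assumes the petsc-mumps interface passes through
# ICNTL(35) (turns on BLR) and CNTL(7) (BLR compression tolerance).
mpiexec -n 4 ./ex2 -ksp_type gmres \
  -pc_type lu -pc_factor_mat_solver_package mumps \
  -mat_mumps_icntl_35 1 -mat_mumps_cntl_7 1e-4 \
  -ksp_monitor
```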
<div class="m_-8729706842368820723m_7549400270970232919gmail-m_8252441951155370891m_-5478992268712916641HOEnZb"><div class="m_-8729706842368820723m_7549400270970232919gmail-m_8252441951155370891m_-5478992268712916641h5"><br>
<br>
><br>
> In advance, thank you very much for your comments.<br>
><br>
> Best,<br>
> Evan<br>
><br>
><br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div></div></div></div></div>
</blockquote></div></div></div><br></div>
</blockquote></div><br></div>