<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Apr 30, 2014 at 6:19 AM, Justin Dong <span dir="ltr"><<a href="mailto:jsd1@rice.edu" target="_blank">jsd1@rice.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Thanks. Even if I turn on the Krylov solver, the issue still persists.<div><br></div><div>mpiexec -n 4 -ksp_type gmres -ksp_rtol 1.0e-13 -pc_type lu -pc_factor_mat_solver_package superlu_dist<br></div>
<div><br></div><div>I'm testing on a very small system now (24 ndofs), but if I go larger (around 20000 ndofs) then it gets worse.</div><div><br></div><div>For the small system, I exported the matrices to MATLAB to make sure they were being assembled correctly in parallel, and I'm certain that they are.</div>
</div></blockquote><div><br></div><div>For convergence questions, always run using -ksp_monitor -ksp_view so that we can see exactly what you run.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div>
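<div><br></div><div>For instance, the full invocation would look something like the following (the executable name <tt>./mysolver</tt> is a stand-in here, since the original command omits the program name):</div>

```shell
mpiexec -n 4 ./mysolver \
  -ksp_type gmres -ksp_rtol 1.0e-13 \
  -pc_type lu -pc_factor_mat_solver_package superlu_dist \
  -ksp_monitor -ksp_view
```

<div><br></div><div>-ksp_monitor prints the residual norm at each iteration, and -ksp_view prints the solver configuration actually used, so we can verify the options took effect.</div>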
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Apr 30, 2014 at 5:32 AM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>On Wed, Apr 30, 2014 at 3:02 AM, Justin Dong <span dir="ltr"><<a href="mailto:jsd1@rice.edu" target="_blank">jsd1@rice.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">I actually was able to solve my own problem... For some reason, I need to do<div><br><div><div>PCSetType(pc, PCLU);</div><div>PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST);</div><div>KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);</div>
</div></div></div></blockquote><div><br></div></div><div>1) Before you do PCSetType(pc, PCLU), the preconditioner has no type, so PCFactorSetMatSolverPackage() has no effect</div><div><br></div><div>2) There is a larger issue here. Never ever ever ever code in this way. Hardcoding a solver is crazy. The solver you</div>
<div> use should depend on the equation, discretization, flow regime, and architecture. Recompiling for all those is</div><div> out of the question. You should just use</div><div><br></div><div> KSPCreate()</div>
<div> KSPSetOperators()</div><div> KSPSetFromOptions()</div><div> KSPSolve()</div><div><br></div><div>and then</div><div><br></div><div> -pc_type lu -pc_factor_mat_solver_package superlu_dist</div><div>
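<div><br></div><div>As a minimal sketch of that options-driven pattern (assembly of A, b, and x is elided, and the names are placeholders; the four-argument KSPSetOperators() matches PETSc 3.4-era code like yours, while 3.5 and later drop the MatStructure flag):</div>

```c
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat A;      /* assumed created and assembled elsewhere */
  Vec b, x;   /* right-hand side and solution, assembled elsewhere */
  KSP ksp;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* ... create and assemble A, b, and x here ... */

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);
  KSPSetFromOptions(ksp);   /* solver is chosen at runtime, not compile time */
  KSPSolve(ksp, b, x);

  KSPDestroy(&ksp);
  PetscFinalize();
  return 0;
}
```

<div><br></div><div>With no hardcoded PCSetType() or PCFactorSetMatSolverPackage() calls, switching from SuperLU_DIST to any other solver is just a change of command-line options, with no recompile.</div>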
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><div>
</div></div><div><br></div><div>instead of the ordering I initially had, though I'm not really clear on what the issue was. However, there seems to be some loss of accuracy as I increase the number of processes. Is this expected, or can I force a lower tolerance somehow? I am able to compare the solutions to a reference solution, and the error increases as I increase the processes. This is the solution in sequential:</div>
</div></blockquote><div><br></div></div><div>Yes, this is unavoidable. However, just turn on the Krylov solver</div><div><br></div><div> -ksp_type gmres -ksp_rtol 1.0e-10</div><div><br></div><div>and you can get whatever residual tolerance you want. To get a specific error, you would need</div>
<div>a posteriori error estimation, which you could include in a custom convergence criterion.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div><div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><div>superlu_1process = [</div><div>-6.8035811950925553e-06</div><div>1.6324030474375778e-04</div><div>5.4145340579614926e-02</div><div>1.6640521936281516e-04</div><div>-1.7669374392923965e-04</div><div>
-2.8099208957838207e-04</div><div>5.3958133511222223e-02</div><div>-5.4077899123806263e-02</div><div>-5.3972905090366369e-02</div><div>-1.9485020474821160e-04</div><div>5.4239813043824400e-02</div><div>4.4883984259948430e-04];</div>
</div><div><br></div><div><div>superlu_2process = [</div><div>-6.8035811950509821e-06</div><div>1.6324030474371623e-04</div><div>5.4145340579605655e-02</div><div>1.6640521936281687e-04</div><div>-1.7669374392923807e-04</div>
<div>-2.8099208957839834e-04</div><div>5.3958133511212911e-02</div><div>-5.4077899123796964e-02</div><div>-5.3972905090357078e-02</div><div>-1.9485020474824480e-04</div><div>5.4239813043815172e-02</div><div>4.4883984259953320e-04];</div>
</div><div><br></div><div>superlu_4process= [<br></div><div><div>-6.8035811952565206e-06</div><div>1.6324030474386164e-04</div><div>5.4145340579691455e-02</div><div>1.6640521936278326e-04</div><div>-1.7669374392921441e-04</div>
<div>-2.8099208957829171e-04</div><div>5.3958133511299078e-02</div><div>-5.4077899123883062e-02</div><div>-5.3972905090443085e-02</div><div>-1.9485020474806352e-04</div><div>5.4239813043900860e-02</div><div>4.4883984259921287e-04];</div>
</div><div><br></div><div>This is some finite element solution and I can compute the error of the solution against an exact solution in the functional L2 norm:</div><div><br></div><div>error with 1 process: 1.71340e-02 (accepted value)</div>
<div>error with 2 processes: 2.65018e-02 </div><div>error with 3 processes: 3.00164e-02 </div>
<div>error with 4 processes: 3.14544e-02 </div><div><br></div><div><br></div><div>Is there a way to remedy this?</div>
</div><div><div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong <span dir="ltr"><<a href="mailto:jsd1@rice.edu" target="_blank">jsd1@rice.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hi,<div><br></div><div>I'm trying to solve a linear system in parallel using SuperLU but for some reason, it's not giving me the correct solution. I'm testing on a small example so I can compare the sequential and parallel cases manually. I'm absolutely sure that my routine for generating the matrix and right-hand side in parallel is working correctly.</div>
<div><br></div><div>Running with 1 process and PCLU gives the correct solution. Running with 2 processes and using SUPERLU_DIST does not give the correct solution (I tried with 1 process too, but according to the SuperLU documentation, I would need SUPERLU rather than SUPERLU_DIST for the sequential case?). This is the code for solving the system:</div>
<div><br></div><div><div> /* solve the system */</div><div><span style="white-space:pre-wrap"> </span>KSPCreate(PETSC_COMM_WORLD, &ksp);</div><div><span style="white-space:pre-wrap"> </span>KSPSetOperators(ksp, Aglobal, Aglobal, DIFFERENT_NONZERO_PATTERN);</div>
<div><span style="white-space:pre-wrap"> </span>KSPSetType(ksp,KSPPREONLY);</div><div><br></div><div><span style="white-space:pre-wrap"> </span>KSPGetPC(ksp, &pc);</div><div><br></div><div><span style="white-space:pre-wrap"> </span>KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);</div>
<div><span style="white-space:pre-wrap"> </span>PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST);</div><div><br></div><div><span style="white-space:pre-wrap"> </span>KSPSolve(ksp, bglobal, bglobal);</div></div>
<div><br></div><div>Sincerely,</div><div>Justin</div><div><br></div><div><br></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div></div></div><span><font color="#888888"><br><br clear="all"><span class="HOEnZb"><font color="#888888"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener
</font></span></font></span></div></div>
</blockquote></div><br></div>
</blockquote></div><br><br clear="all"><div><br></div>
</div></div>