I see. Thanks so much for the comment.

On Mon, Oct 24, 2022 at 10:47 AM Jed Brown <jed@jedbrown.org> wrote:

You can get lucky with null spaces even with factorization preconditioners, especially if the right hand side is orthogonal to the null space. But it's fragile and you shouldn't rely on that being true as you change the problem. You can either remove the null space in your problem formulation (maybe) or use iterative solvers/fieldsplit preconditioning (which can use a direct solver on nonsingular blocks).
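In case it helps to see the second option spelled out, a rough sketch of a fieldsplit setup could look like the options below. This assumes the same "solver_" prefix as the flags quoted further down and relies on PETSc's automatic saddle-point detection to define the two blocks; the exact prefixes and field numbering depend on how the solver is set up through MFEM.

  -solver_ksp_type fgmres
  -solver_pc_type fieldsplit
  -solver_pc_fieldsplit_type schur
  -solver_pc_fieldsplit_detect_saddle_point
  -solver_pc_fieldsplit_schur_precondition selfp
  # direct solve only on the nonsingular (0,0) block
  -solver_fieldsplit_0_ksp_type preonly
  -solver_fieldsplit_0_pc_type lu
  -solver_fieldsplit_0_pc_factor_mat_solver_type mumps
  # handle the Schur-complement block iteratively
  -solver_fieldsplit_1_ksp_type gmres
  -solver_fieldsplit_1_pc_type jacobi

That way the LU/MUMPS factorization is applied only to a nonsingular block, while the block carrying the null space is treated iteratively.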

Jau-Uei Chen <chenju@utexas.edu> writes:

> To whom it may concern,
>
> I am writing to ask about using PETSc with a direct solver to solve a
> linear system that has a single zero eigenvalue.
>
> Currently, I am working on developing a finite-element solver for a
> linearized incompressible MHD equation. The code is built on the
> open-source library MFEM, which provides its own wrapper for PETSc.
> From analysis, I already know that the linear system (Ax=b) to be solved
> is a saddle-point system, and by using the flags "-solver_pc_type svd"
> and "-solver_pc_svd_monitor" I can indeed observe this. Here is an
> example of the output:
>
> SVD: condition number 3.271390119581e+18, 1 of 66 singular values are
> (nearly) zero
> SVD: smallest singular values: 3.236925932523e-17 3.108788619412e-04
> 3.840514506502e-04 4.599292003910e-04 4.909419974671e-04
> SVD: largest singular values : 4.007319935079e+00 4.027759008411e+00
> 4.817755760754e+00 4.176127583956e+01 1.058924751347e+02
>
>
> However, what surprises me is that the numerical solutions are still
> relatively accurate compared with the exact (manufactured) solutions
> when I perform convergence tests, even though I am using a direct solver
> (i.e. -solver_ksp_type preonly -solver_pc_type lu
> -solver_pc_factor_mat_solver_type mumps). My question is: why doesn't
> the direct solver break down in this context? I understand that the zero
> eigenvalue is not an issue for iterative solvers such as GMRES [1][2],
> but I am not sure why it does not cause trouble for direct solvers.
>
> Any comments or suggestions are greatly appreciated.
>
> Best Regards,
> Jau-Uei Chen
>
> References:
> [1] Benzi, Michele, et al. “Numerical Solution of Saddle Point Problems.”
> Acta Numerica, vol. 14, May 2005, pp. 1–137.
> https://doi.org/10.1017/S0962492904000212
> [2] Elman, Howard C., et al. Finite Elements and Fast Iterative Solvers:
> With Applications in Incompressible Fluid Dynamics. Second edition,
> Oxford University Press, 2014.