   I would be stunned and amazed if this worked. Sparse factorization codes use very complicated data structures to store the resulting "factors", and the solves are complicated code that traverses those "factor" data structures to perform the solve.

   Barry

On Nov 22, 2025, at 6:58 AM, Yin Shi <yin.shi1@icloud.com> wrote:

Thank you very much for your reply. Given this, when using MUMPS in parallel, I can still get the factor matrix (using the getFactorMatrix method of a PC object) and use it to do matrix multiplications (e.g., using the matMult method of the factor matrix), correct? I would also like to confirm whether the factor matrix returned is really triangular and whether multiplying it with another matrix gives the intended result.

On Nov 16, 2025, at 08:59, Barry Smith <bsmith@petsc.dev> wrote:

   It appears that only MATSOLVERMKL_CPARDISO currently provides a parallel backward solve.

   The only separation of forward and backward solves in MUMPS appears to be the one described in its users' manual:

      "A special case is the one where the forward elimination step is performed during factorization (see Subsection 3.8), instead of during the solve phase. This allows accessing the L factors right after they have been computed, with a better locality, and can avoid writing the L factors to disk in an out-of-core context. In this case (forward [...]"
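For concreteness, a minimal petsc4py sketch of the call sequence under discussion (choosing the factorization package on a Cholesky PC, retrieving the factor with getFactorMatrix, and invoking the split triangular solves) might look like the following. The tiny identity matrix is only an illustrative stand-in, 'mkl_cpardiso' is the package named in the message above, and whether the split solves actually succeed in parallel for other packages is exactly the open question of this thread.

from petsc4py import PETSc

comm = PETSc.COMM_WORLD

# Tiny illustrative SPD matrix (the identity); any assembled symmetric
# positive-definite AIJ matrix would do here.
m = 8
C = PETSc.Mat().createAIJ([m, m], nnz=1, comm=comm)
rstart, rend = C.getOwnershipRange()
for i in range(rstart, rend):
    C[i, i] = 1.0
C.assemble()

ksp = PETSc.KSP().create(comm=comm)
ksp.setOperators(C)
ksp.setType(PETSc.KSP.Type.PREONLY)
pc = ksp.getPC()
pc.setType(PETSc.PC.Type.CHOLESKY)
# Equivalent to the command-line option -pc_factor_mat_solver_type mkl_cpardiso.
# Some packages may require a symmetric (SBAIJ) matrix format for Cholesky.
pc.setFactorSolverType('mkl_cpardiso')
ksp.setUp()                      # performs the factorization C = L L^T

F = pc.getFactorMatrix()         # the factored-matrix object
x, b = C.createVecs()
y = b.duplicate()
b.set(1.0)

F.solveForward(b, y)             # forward solve (the L part)
F.solveBackward(y, x)            # backward solve (the L^T part)
# (How orderings and any diagonal scaling are handled varies by package.)

With 'mumps' or 'superlu_dist' substituted above, the same calls are what one would write, but, per the messages in this thread, the backward solve does not appear to be available in parallel for those packages.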
On Nov 15, 2025, at 9:17 AM, Yin Shi via petsc-users <petsc-users@mcs.anl.gov> wrote:

Dear Developers,

In short, I need to explicitly use A.solveBackward(b, x) in parallel with petsc4py, where A is a Cholesky-factored matrix, but it seems that this is not supported (e.g., for the mumps and superlu_dist factorization solver backends). Is it possible to work around this?

In detail, the problem I need to solve is to generate a set of correlated random numbers (denoted by a vector w) from an uncorrelated one (denoted by a vector n). Denote the covariance matrix of n as C (symmetric). One needs to first factorize C, C = L L^T, and then solve the linear system L^T w = n for w in parallel. Is it possible to reformulate this problem so that it can be implemented using petsc4py?

Thank you!
Yin
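For reference, a sketch of the sampling workflow described above in petsc4py, under assumptions: a small tridiagonal SPD matrix stands in for C, the random vector n is drawn with Vec.setRandom (replace with whatever distribution is actually needed), and the default (native, sequential) PETSc Cholesky is used so that solveBackward is available; doing the same in parallel with mumps or superlu_dist is the unresolved point of this thread.

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
nsize = 1000

# Stand-in for the symmetric positive-definite matrix C.
C = PETSc.Mat().createAIJ([nsize, nsize], nnz=(3, 2), comm=comm)
C.setOption(PETSc.Mat.Option.SYMMETRIC, True)
rstart, rend = C.getOwnershipRange()
for i in range(rstart, rend):
    C[i, i] = 2.0
    if i > 0:
        C[i, i - 1] = -1.0
    if i < nsize - 1:
        C[i, i + 1] = -1.0
C.assemble()

# Cholesky factorization C = L L^T through a KSP/PC pair.
ksp = PETSc.KSP().create(comm=comm)
ksp.setOperators(C)
ksp.setType(PETSc.KSP.Type.PREONLY)
pc = ksp.getPC()
pc.setType(PETSc.PC.Type.CHOLESKY)
# The default (native PETSc) Cholesky only runs on one process, so run this
# sketch sequentially; the thread is about using pc.setFactorSolverType('mumps')
# or 'superlu_dist' instead and still having solveBackward available in parallel.
ksp.setUp()                        # performs the numeric factorization

F = pc.getFactorMatrix()           # the factored-matrix object

# Uncorrelated random vector n and the result vector w of the question.
w, n_vec = C.createVecs()
n_vec.setRandom()

# Backward solve L^T w = n for the correlated vector w. (Depending on the
# package, the factorization may be stored as L D L^T, in which case the
# diagonal scaling is folded into one of the split solves.)
F.solveBackward(n_vec, w)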