<div dir="ltr">Thanks, I will give it a try.<div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Best wishes,<div>Zongze</div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 4 Mar 2023 at 23:09, Pierre Jolivet <<a href="mailto:pierre@joliv.et">pierre@joliv.et</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="ltr"></div><div dir="ltr"><br></div><div dir="ltr"><br><blockquote type="cite">On 4 Mar 2023, at 3:26 PM, Zongze Yang <<a href="mailto:yangzongze@gmail.com" target="_blank">yangzongze@gmail.com</a>> wrote:<br><br></blockquote></div><blockquote type="cite"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 4 Mar 2023 at 22:03, Pierre Jolivet <<a href="mailto:pierre@joliv.et" target="_blank">pierre@joliv.et</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><br><div><br><blockquote type="cite"><div>On 4 Mar 2023, at 2:51 PM, Zongze Yang <<a href="mailto:yangzongze@gmail.com" target="_blank">yangzongze@gmail.com</a>> wrote:</div><br><div><br><br style="font-family:Helvetica;font-size:12px;font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><div class="gmail_quote" style="font-family:Helvetica;font-size:12px;font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><div dir="ltr" class="gmail_attr">On Sat, 4 Mar 2023 at 21:37, Pierre Jolivet <<a href="mailto:pierre@joliv.et" target="_blank">pierre@joliv.et</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br><br>> On 4 Mar 2023, at 2:30 PM, Zongze Yang <<a href="mailto:yangzongze@gmail.com" target="_blank">yangzongze@gmail.com</a>> wrote:<br>><span> </span><br>> Hi,<span> </span><br>><span> </span><br>> I am writing to seek your advice regarding a problem I encountered while using multigrid to solve a certain issue.<br>> I am currently using multigrid with the coarse problem solved by PCLU. However, the PC failed randomly with the error below (the value of INFO(2) may differ):<br>> ```shell<br>> [ 0] Error reported by MUMPS in numerical factorization phase: INFOG(1)=-9, INFO(2)=36<br>> ```<br>><span> </span><br>> Upon checking the documentation of MUMPS, I discovered that increasing the value of ICNTL(14) may help resolve the issue. Specifically, I set the option -mat_mumps_icntl_14 to a higher value (such as 40), and the error seemed to disappear after I set the value of ICNTL(14) to 80. However, I am still curious as to why MUMPS failed randomly in the first place.<br>><span> </span><br>> Upon further inspection, I found that the number of nonzeros of the PETSc matrix and the MUMPS matrix were different every time I ran the code. I am now left with the following questions:<br>><span> </span><br>> 1. 
>>>>>> Upon further inspection, I found that the number of nonzeros of the PETSc matrix and the MUMPS matrix were different every time I ran the code. I am now left with the following questions:
>>>>>>
>>>>>> 1. What could be causing the number of nonzeros of the MUMPS matrix to change every time I run the code?
>>>>>
>>>>> Is the Mat being fed to MUMPS distributed on a communicator of size greater than one?
>>>>> If yes, then, depending on the pivoting and the renumbering, you may get non-deterministic results.
>>>>
>>>> Hi, Pierre,
>>>> Thank you for your prompt reply. Yes, the size of the communicator is greater than one.
>>>> Even if the size of the communicator stays the same between runs, are the results still non-deterministic?
>>>
>>> In the most general case, yes.
>>>
>>>> Can I assume the Mat being fed to MUMPS is the same in this case?
>>>
>>> Are you doing algebraic or geometric multigrid?
>>> Are the prolongation operators computed by Firedrake or by PETSc, e.g., through GAMG?
>>> If it's the latter, I believe the Mat being fed to MUMPS should always be the same.
>>> If it's the former, you'll have to ask the Firedrake people whether there may be non-determinism in the coarsening process.
>>
>> I am using geometric multigrid, and the prolongation operators, I think, are computed by Firedrake.
>> Thanks for your suggestion, I will ask the Firedrake people.
>>
>>>> Are the pivoting and the renumbering all done by MUMPS rather than by PETSc?
>>>
>>> You could provide your own numbering, but by default this is indeed outsourced to MUMPS, which will itself outsource it to METIS, AMD, etc.
>>
>> I think I won't do this.
>> By the way, does superlu_dist show similar non-deterministic behavior?
>
> SuperLU_DIST uses static pivoting as far as I know, so it may be more deterministic.
>
> Thanks,
> Pierre
>
>> Thanks,
>> Zongze
>>
>>> Thanks,
>>> Pierre
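Related to Pierre's point above about the renumbering being outsourced to METIS, AMD, etc.: a sketch of how one might pin MUMPS to a sequential, reproducible analysis and ordering through the generic mat_mumps_icntl options, assuming the ICNTL meanings from the MUMPS users' guide (ICNTL(28) = 1 selects sequential analysis, ICNTL(7) = 5 selects METIS); numerical pivoting may still introduce run-to-run differences. The commented line shows the package switch mentioned at the end of the exchange.

```python
from petsc4py import PETSc

opts = PETSc.Options()
# ICNTL(28) = 1: sequential analysis; ICNTL(7) = 5: METIS ordering
# (meanings taken from the MUMPS users' guide).
opts["mat_mumps_icntl_28"] = 1
opts["mat_mumps_icntl_7"] = 5
# Alternative: a package that uses static pivoting.
# opts["pc_factor_mat_solver_type"] = "superlu_dist"
```

These options have to be in the options database before the factorization is set up; the same strings can also go into a Firedrake solver_parameters dict or on the command line as -mat_mumps_icntl_28 1 -mat_mumps_icntl_7 5.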
>>>>>> 2. Why is the number of nonzeros of the MUMPS matrix significantly greater than that of the PETSc matrix (as seen in the output of ksp_view, 115025949 vs 7346177)?
>>>>>
>>>>> Exact factorizations introduce fill-in.
>>>>> The number of nonzeros you are seeing for MUMPS is the number of nonzeros in the factors.
>>>>>
>>>>>> 3. Is it possible that the varying number of nonzeros of the MUMPS matrix is the cause of the random failure?
>>>>>
>>>>> Yes, MUMPS uses dynamic scheduling, which will depend on numerical pivoting, and which may generate factors with different numbers of nonzeros.
>>>>
>>>> Got it. Thank you for your clear explanation.
>>>> Zongze
>>>>
>>>>> Thanks,
>>>>> Pierre
>>>>>
>>>>>> I have attached a test example written in Firedrake. The output of `ksp_view` after running the code twice is included below for your reference.
>>>>>> In the output, the number of nonzeros of the MUMPS matrix was 115025949 and 115377847, respectively, while that of the PETSc matrix was only 7346177.
>>>>>>
>>>>>> ```shell
>>>>>> (complex-int32-mkl) $ mpiexec -n 32 python test_mumps.py -ksp_view ::ascii_info_detail | grep -A3 "type: "
>>>>>> type: preonly
>>>>>> maximum iterations=10000, initial guess is zero
>>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
>>>>>> left preconditioning
>>>>>> --
>>>>>> type: lu
>>>>>> out-of-place factorization
>>>>>> tolerance for zero pivot 2.22045e-14
>>>>>> matrix ordering: external
>>>>>> --
>>>>>> type: mumps
>>>>>> rows=1050625, cols=1050625
>>>>>> package used to perform factorization: mumps
>>>>>> total: nonzeros=115025949, allocated nonzeros=115025949
>>>>>> --
>>>>>> type: mpiaij
>>>>>> rows=1050625, cols=1050625
>>>>>> total: nonzeros=7346177, allocated nonzeros=7346177
>>>>>> total number of mallocs used during MatSetValues calls=0
>>>>>> (complex-int32-mkl) $ mpiexec -n 32 python test_mumps.py -ksp_view ::ascii_info_detail | grep -A3 "type: "
>>>>>> type: preonly
>>>>>> maximum iterations=10000, initial guess is zero
>>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
>>>>>> left preconditioning
>>>>>> --
>>>>>> type: lu
>>>>>> out-of-place factorization
>>>>>> tolerance for zero pivot 2.22045e-14
>>>>>> matrix ordering: external
>>>>>> --
>>>>>> type: mumps
>>>>>> rows=1050625, cols=1050625
>>>>>> package used to perform factorization: mumps
>>>>>> total: nonzeros=115377847, allocated nonzeros=115377847
>>>>>> --
>>>>>> type: mpiaij
>>>>>> rows=1050625, cols=1050625
>>>>>> total: nonzeros=7346177, allocated nonzeros=7346177
>>>>>> total number of mallocs used during MatSetValues calls=0
>>>>>> ```
>>>>>>
>>>>>> I would greatly appreciate any insights you may have on this matter. Thank you in advance for your time and assistance.
>>>>>>
>>>>>> Best wishes,
>>>>>> Zongze
>>>>>>
>>>>>> <test_mumps.py>
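To see the fill-in Pierre describes in the exchange above, one can compare the nonzeros of the assembled matrix with what MUMPS reports for its factors. A small petsc4py sketch, assuming petsc4py's MUMPS accessor getMumpsInfog and that INFOG(29) is the effective number of entries in the factors, per the MUMPS users' guide; the matrix here is a toy 1D Laplacian, not the problem from the thread.

```python
from petsc4py import PETSc

# Small 1D Laplacian, just to have something to factor.
n = 1000
A = PETSc.Mat().createAIJ([n, n], nnz=(3, 2))
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType("preonly")
pc = ksp.getPC()
pc.setType("lu")
pc.setFactorSolverType("mumps")
ksp.setUp()  # performs the symbolic and numeric factorization

F = pc.getFactorMatrix()  # the factored Mat handled by MUMPS
PETSc.Sys.Print("nonzeros in A:          ", int(A.getInfo()["nz_used"]))
PETSc.Sys.Print("nonzeros in the factors:", F.getMumpsInfog(29))
```

Run on several processes, the factor count may change from run to run while the nonzero count of A stays fixed, which is the behavior seen in the ksp_view output quoted above.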