<div dir="ltr"><div dir="ltr">On Mon, May 22, 2023 at 10:42 PM Zongze Yang <<a href="mailto:yangzongze@gmail.com">yangzongze@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr">On Tue, 23 May 2023 at 05:31, Stefano Zampini <<a href="mailto:stefano.zampini@gmail.com" target="_blank">stefano.zampini@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">If I may add to the discussion, it may be that you are going OOM since you are trying to factorize a 3 million dofs problem, this problem goes undetected and then fails at a later stage</div></blockquote><div> </div><div>Thank you for your comment. I ran the problem with 90 processes distributed across three nodes, each equipped with 500G of memory. If this amount of memory is sufficient for solving the matrix with approximately 3 million degrees of freedom?</div></div></div></div></blockquote><div><br></div><div>It really depends on the fill. Suppose that you get 1% fill, then</div><div><br></div><div> (3e6)^2 * 0.01 * 8 = 1e12 B</div><div><br></div><div>and you have 1.5e12 B, so I could easily see running out of memory.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div>Thanks!</div><div>Zongze</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Il giorno lun 22 mag 2023 alle ore 20:03 Zongze Yang <<a href="mailto:yangzongze@gmail.com" target="_blank">yangzongze@gmail.com</a>> ha scritto:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Thanks!</div><div dir="auto"><br></div><div><span style="border-color:rgb(0,0,0);color:rgb(0,0,0)">Zongze</span></div><div><span style="border-color:rgb(0,0,0);color:rgb(0,0,0)"><br></span></div><div style="background-color:rgba(0,0,0,0);border-color:rgb(255,255,255)"><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>>于2023年5月23日 周二00:09写道:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Mon, May 22, 2023 at 11:07 AM Zongze Yang <<a href="mailto:yangzongze@gmail.com" target="_blank">yangzongze@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Hi,<br></div><div><br></div><div><div>I hope this letter finds you well. I am writing to seek guidance regarding an error I encountered while solving a matrix using MUMPS on multiple nodes:</div></div></div></div></div></div></div></blockquote><div><br></div><div>Iprobe is buggy on several MPI implementations. 

  Thanks,

     Matt

> Thanks!
> Zongze
>
>> On Mon, May 22, 2023 at 8:03 PM Zongze Yang <yangzongze@gmail.com> wrote:
>>> Thanks!
>>>
>>> Zongze
>>>
>>> On Tue, May 23, 2023 at 00:09 Matthew Knepley <knepley@gmail.com> wrote:
>>>> On Mon, May 22, 2023 at 11:07 AM Zongze Yang <yangzongze@gmail.com> wrote:
>>>>> Hi,
>>>>>
>>>>> I hope this email finds you well. I am writing to seek guidance
>>>>> regarding an error I encountered while solving a matrix with MUMPS
>>>>> on multiple nodes:
>>>>
>>>> Iprobe is buggy on several MPI implementations, and PETSc has an option
>>>> for shutting it off for this reason. I do not know how to shut it off
>>>> inside MUMPS, however; I would ask on their mailing list.
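>>>>
>>>> If I am remembering the PETSc side correctly, the option is
>>>> -build_twosided; treat this as a hedged sketch and double-check the
>>>> manual. It selects how PETSc builds two-sided communication, and the
>>>> allreduce variant avoids MPI_Iprobe, but it only affects PETSc's own
>>>> exchanges, not the Iprobe calls MUMPS makes internally:
>>>>
>>>> ```bash
>>>> # Hedged sketch: switch PETSc's two-sided setup away from the
>>>> # Iprobe-based (ibarrier) implementation; "./myapp" is a placeholder
>>>> # for the actual executable and its options.
>>>> mpiexec -n 90 ./myapp -build_twosided allreduce
>>>> ```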
>>>>
>>>>  Thanks,
>>>>
>>>>     Matt
>>>>
>>>>> ```bash
>>>>> Abort(1681039) on node 60 (rank 60 in comm 240): Fatal error in PMPI_Iprobe: Other MPI error, error stack:
>>>>> PMPI_Iprobe(124)..............: MPI_Iprobe(src=MPI_ANY_SOURCE, tag=MPI_ANY_TAG, comm=0xc4000026, flag=0x7ffc130f9c4c, status=0x7ffc130f9e80) failed
>>>>> MPID_Iprobe(240)..............:
>>>>> MPIDI_iprobe_safe(108)........:
>>>>> MPIDI_iprobe_unsafe(35).......:
>>>>> MPIDI_OFI_do_iprobe(69).......:
>>>>> MPIDI_OFI_handle_cq_error(949): OFI poll failed (ofi_events.c:951:MPIDI_OFI_handle_cq_error:Input/output error)
>>>>> Assertion failed in file src/mpid/ch4/netmod/ofi/ofi_events.c at line 125: 0
>>>>> ```
>>>>>
>>>>> The matrix in question has approximately 3.86e+06 degrees of freedom
>>>>> (dofs). Interestingly, smaller-scale problems solve without any issue;
>>>>> it is only when solving the larger matrix on multiple nodes that I
>>>>> encounter the above error.
>>>>>
>>>>> The complete error message I received is as follows:
>>>>> ```bash
>>>>> Abort(1681039) on node 60 (rank 60 in comm 240): Fatal error in PMPI_Iprobe: Other MPI error, error stack:
>>>>> PMPI_Iprobe(124)..............: MPI_Iprobe(src=MPI_ANY_SOURCE, tag=MPI_ANY_TAG, comm=0xc4000026, flag=0x7ffc130f9c4c, status=0x7ffc130f9e80) failed
>>>>> MPID_Iprobe(240)..............:
>>>>> MPIDI_iprobe_safe(108)........:
>>>>> MPIDI_iprobe_unsafe(35).......:
>>>>> MPIDI_OFI_do_iprobe(69).......:
>>>>> MPIDI_OFI_handle_cq_error(949): OFI poll failed (ofi_events.c:951:MPIDI_OFI_handle_cq_error:Input/output error)
>>>>> Assertion failed in file src/mpid/ch4/netmod/ofi/ofi_events.c at line 125: 0
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(MPL_backtrace_show+0x26) [0x7f6076063f2c]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(+0x41dc24) [0x7f6075fc5c24]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(+0x49cc51) [0x7f6076044c51]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(+0x49f799) [0x7f6076047799]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(+0x451e18) [0x7f6075ff9e18]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(+0x452272) [0x7f6075ffa272]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(+0x2ce836) [0x7f6075e76836]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(+0x2ce90d) [0x7f6075e7690d]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(+0x48137b) [0x7f607602937b]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(+0x44d471) [0x7f6075ff5471]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(+0x407acd) [0x7f6075fafacd]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(MPIR_Err_return_comm+0x10a) [0x7f6075fafbea]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpi.so.12(MPI_Iprobe+0x312) [0x7f6075ddd542]
>>>>> /nfs/opt/cascadelake/linux-centos7-cascadelake/gcc-9.4.0/mpich-3.4.2-qgtz76gekvjzuacy7wq5a26rqlewoxfc/lib/libmpifort.so.12(pmpi_iprobe+0x2f) [0x7f606e08f19f]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/mumps-5.5.1-gb7wlwxwbalf5rw5vkp6gtkhfkdqpntz/lib/libzmumps.so(__zmumps_load_MOD_zmumps_load_recv_msgs+0x142) [0x7f60737b194d]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/mumps-5.5.1-gb7wlwxwbalf5rw5vkp6gtkhfkdqpntz/lib/libzmumps.so(zmumps_try_recvtreat_+0x34) [0x7f60738ab735]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/mumps-5.5.1-gb7wlwxwbalf5rw5vkp6gtkhfkdqpntz/lib/libzmumps.so(__zmumps_fac_par_m_MOD_zmumps_fac_par+0x991) [0x7f607378bcc8]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/mumps-5.5.1-gb7wlwxwbalf5rw5vkp6gtkhfkdqpntz/lib/libzmumps.so(zmumps_fac_par_i_+0x240) [0x7f6073881d36]
>>>>> Abort(805938831) on node 51 (rank 51 in comm 240): Fatal error in PMPI_Iprobe: Other MPI error, error stack:
>>>>> PMPI_Iprobe(124)..............: MPI_Iprobe(src=MPI_ANY_SOURCE, tag=MPI_ANY_TAG, comm=0xc4000017, flag=0x7ffe20e1402c, status=0x7ffe20e14260) failed
>>>>> MPID_Iprobe(244)..............:
>>>>> progress_test(100)............:
>>>>> MPIDI_OFI_handle_cq_error(949): OFI poll failed (ofi_events.c:951:MPIDI_OFI_handle_cq_error:Input/output error)
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/mumps-5.5.1-gb7wlwxwbalf5rw5vkp6gtkhfkdqpntz/lib/libzmumps.so(zmumps_fac_b_+0x1463) [0x7f60738831a1]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/mumps-5.5.1-gb7wlwxwbalf5rw5vkp6gtkhfkdqpntz/lib/libzmumps.so(zmumps_fac_driver_+0x6969) [0x7f60738446c9]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/mumps-5.5.1-gb7wlwxwbalf5rw5vkp6gtkhfkdqpntz/lib/libzmumps.so(zmumps_+0x2d83) [0x7f60738bf9cf]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/mumps-5.5.1-gb7wlwxwbalf5rw5vkp6gtkhfkdqpntz/lib/libzmumps.so(zmumps_f77_+0x178c) [0x7f60738c33bc]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/mumps-5.5.1-gb7wlwxwbalf5rw5vkp6gtkhfkdqpntz/lib/libzmumps.so(zmumps_c+0x8f8) [0x7f60738baacb]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/petsc-develop-5wrc3y6lyelr3iyrlm3sr2jlh2wxif3k/lib/libpetsc.so.3.019(+0x894560) [0x7f6077297560]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/petsc-develop-5wrc3y6lyelr3iyrlm3sr2jlh2wxif3k/lib/libpetsc.so.3.019(MatLUFactorNumeric+0x32e) [0x7f60773bb1e6]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/petsc-develop-5wrc3y6lyelr3iyrlm3sr2jlh2wxif3k/lib/libpetsc.so.3.019(+0xf51665) [0x7f6077954665]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/petsc-develop-5wrc3y6lyelr3iyrlm3sr2jlh2wxif3k/lib/libpetsc.so.3.019(PCSetUp+0x64b) [0x7f60779c77e0]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/petsc-develop-5wrc3y6lyelr3iyrlm3sr2jlh2wxif3k/lib/libpetsc.so.3.019(KSPSetUp+0xfb6) [0x7f6077ac2d53]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/petsc-develop-5wrc3y6lyelr3iyrlm3sr2jlh2wxif3k/lib/libpetsc.so.3.019(+0x10c1c28) [0x7f6077ac4c28]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/petsc-develop-5wrc3y6lyelr3iyrlm3sr2jlh2wxif3k/lib/libpetsc.so.3.019(KSPSolve+0x13) [0x7f6077ac8070]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/petsc-develop-5wrc3y6lyelr3iyrlm3sr2jlh2wxif3k/lib/libpetsc.so.3.019(+0x11249df) [0x7f6077b279df]
>>>>> /nfs/home/zzyang/opt/software/linux-centos7-cascadelake/gcc-9.4.0/petsc-develop-5wrc3y6lyelr3iyrlm3sr2jlh2wxif3k/lib/libpetsc.so.3.019(SNESSolve+0x10df) [0x7f6077b676c6]
>>>>> Abort(1) on node 60: Internal error
>>>>> Abort(1007265423) on node 65 (rank 65 in comm 240): Fatal error in PMPI_Iprobe: Other MPI error, error stack:
>>>>> PMPI_Iprobe(124)..............: MPI_Iprobe(src=MPI_ANY_SOURCE, tag=MPI_ANY_TAG, comm=0xc4000017, flag=0x7fff4d82827c, status=0x7fff4d8284b0) failed
>>>>> MPID_Iprobe(244)..............:
>>>>> progress_test(100)............:
>>>>> MPIDI_OFI_handle_cq_error(949): OFI poll failed (ofi_events.c:951:MPIDI_OFI_handle_cq_error:Input/output error)
>>>>> Abort(941205135) on node 32 (rank 32 in comm 240): Fatal error in PMPI_Iprobe: Other MPI error, error stack:
>>>>> PMPI_Iprobe(124)..............: MPI_Iprobe(src=MPI_ANY_SOURCE, tag=MPI_ANY_TAG, comm=0xc4000017, flag=0x7fff715ba3fc, status=0x7fff715ba630) failed
>>>>> MPID_Iprobe(240)..............:
>>>>> MPIDI_iprobe_safe(108)........:
>>>>> MPIDI_iprobe_unsafe(35).......:
>>>>> MPIDI_OFI_do_iprobe(69).......:
>>>>> MPIDI_OFI_handle_cq_error(949): OFI poll failed (ofi_events.c:951:MPIDI_OFI_handle_cq_error:Input/output error)
>>>>> Abort(470941839) on node 75 (rank 75 in comm 0): Fatal error in PMPI_Test: Other MPI error, error stack:
>>>>> PMPI_Test(188)................: MPI_Test(request=0x7efe31e03014, flag=0x7ffea65d673c, status=0x7ffea65d6760) failed
>>>>> MPIR_Test(73).................:
>>>>> MPIR_Test_state(33)...........:
>>>>> progress_test(100)............:
>>>>> MPIDI_OFI_handle_cq_error(949): OFI poll failed (ofi_events.c:951:MPIDI_OFI_handle_cq_error:Input/output error)
>>>>> Abort(805946511) on node 31 (rank 31 in comm 256): Fatal error in PMPI_Probe: Other MPI error, error stack:
>>>>> PMPI_Probe(118)...............: MPI_Probe(src=MPI_ANY_SOURCE, tag=7, comm=0xc4000015, status=0x7fff9538b7a0) failed
>>>>> MPID_Probe(159)...............:
>>>>> progress_test(100)............:
>>>>> MPIDI_OFI_handle_cq_error(949): OFI poll failed (ofi_events.c:951:MPIDI_OFI_handle_cq_error:Input/output error)
>>>>> Abort(1179791) on node 73 (rank 73 in comm 0): Fatal error in PMPI_Test: Other MPI error, error stack:
>>>>> PMPI_Test(188)................: MPI_Test(request=0x5b638d4, flag=0x7ffd755119cc, status=0x7ffd755121b0) failed
>>>>> MPIR_Test(73).................:
>>>>> MPIR_Test_state(33)...........:
>>>>> progress_test(100)............:
>>>>> MPIDI_OFI_handle_cq_error(949): OFI poll failed (ofi_events.c:951:MPIDI_OFI_handle_cq_error:Input/output error)
>>>>> ```
>>>>>
>>>>> Thank you very much for your time and consideration.
>>>>>
>>>>> Best wishes,
>>>>> Zongze
>>>>
>>>> --
>>>> What most experimenters take for granted before they begin their
>>>> experiments is infinitely more interesting than any results to which
>>>> their experiments lead.
>>>> -- Norbert Wiener
>>>>
>>>> https://www.cse.buffalo.edu/~knepley/
>>>
>>> --
>>> Best wishes,
>>> Zongze
>>
>> --
>> Stefano

--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/