> On 12 Aug 2022, at 9:47 PM, Alfredo Jaramillo <ajaramillopalma@gmail.com> wrote:
>
> Hello Mark,
> But why should this depend on the number of processes?

Because with non-binary formats, the matrix is centralized on the first process, which can become very costly (a sketch of the binary alternative is appended below, after the quoted thread).

Thanks,
Pierre

> thanks
> Alfredo
>
> On Fri, Aug 12, 2022 at 1:42 PM Mark Adams <mfadams@lbl.gov> wrote:
>
>> With 4 million elements you are nowhere near the 32-bit integer limit of 2B, or 32 GB of memory.
>>
>> See https://petsc.org/main/docs/manualpages/Mat/MatView
>> You should switch to the binary format when writing large matrices.
>>
>> Mark
>>
>> On Fri, Aug 12, 2022 at 1:00 PM Alfredo Jaramillo <ajaramillopalma@gmail.com> wrote:
>>
>>> Hello Mark,
>>> Thank you, I added the lines that you sent.
>>> This only happens when running the code with more than one process. With only one MPI process the matrix is printed out.
>>> With two processes or more, I observed that the program begins to allocate RAM until it exceeds the computer's capacity (32 GB), so I wasn't able to get a stack trace.
>>>
>>> However, I was able to reproduce the problem by compiling src/ksp/ksp/tutorials/ex54.c
>>> (https://petsc.org/release/src/ksp/ksp/tutorials/ex54.c.html, modifying line 144) and running it with
>>>
>>>     mpirun -np 2 ex54 -ne 1000
>>>
>>> This gives a sparse matrix of order ~1 million.
>>> When running ex54 with only one MPI process I don't observe this excessive allocation and the matrix is printed out.
>>>
>>> Thanks,
>>> Alfredo
>>>
>>> On Fri, Aug 12, 2022 at 10:02 AM Mark Adams <mfadams@lbl.gov> wrote:
>>>
>>>> You also want:
>>>>
>>>>     PetscCall(PetscViewerPopFormat(viewer));
>>>>     PetscCall(PetscViewerDestroy(&viewer));
>>>>
>>>> This should not be a problem.
>>>> If this is a segv and you configured with '--with-debugging=1', you should get a stack trace, which would help immensely.
>>>> Or run in a debugger to get a stack trace.
>>>>
>>>> Thanks,
>>>> Mark
>>>>
>>>> On Fri, Aug 12, 2022 at 11:26 AM Alfredo Jaramillo <ajaramillopalma@gmail.com> wrote:
>>>>
>>>>> Dear developers,
>>>>>
>>>>> I'm writing a sparse matrix into a file by doing
>>>>>
>>>>>     if (dump_mat) {
>>>>>         PetscViewer viewer;
>>>>>         PetscViewerASCIIOpen(PETSC_COMM_WORLD, "mat-par-aux.m", &viewer);
>>>>>         PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_MATLAB);
>>>>>         MatView(A, viewer);
>>>>>     }
>>>>>
>>>>> This works perfectly for small cases.
>>>>> The program crashes for a case where the matrix A is of order 1 million but has only 4 million non-zero elements.
>>>>>
>>>>> Maybe at some point PETSc is full-sizing A?
>>>>>
>>>>> Thank you,
>>>>> Alfredo
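For reference, here is the snippet from the bottom of the thread with the two calls Mark points out added. This is only a sketch: it assumes the same surrounding context as in Alfredo's code (an assembled parallel matrix A and a dump_mat flag), and the ASCII/MATLAB path remains advisable only for small matrices.

    if (dump_mat) {
        PetscViewer viewer;

        /* ASCII viewer writing a MATLAB-readable file; suitable for small matrices */
        PetscCall(PetscViewerASCIIOpen(PETSC_COMM_WORLD, "mat-par-aux.m", &viewer));
        PetscCall(PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_MATLAB));
        PetscCall(MatView(A, viewer));
        PetscCall(PetscViewerPopFormat(viewer));  /* undo the PushFormat, as Mark suggests */
        PetscCall(PetscViewerDestroy(&viewer));   /* free the viewer, as Mark suggests */
    }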
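And a minimal sketch of the binary approach recommended above for large matrices. The file name "mat-par-aux.dat" is only an illustration; a matrix written this way can later be read back with MatLoad:

    PetscViewer viewer;
    Mat         B;

    /* write A in PETSc's binary format, the one recommended for large matrices */
    PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat-par-aux.dat", FILE_MODE_WRITE, &viewer));
    PetscCall(MatView(A, viewer));
    PetscCall(PetscViewerDestroy(&viewer));

    /* later, or in another program: read the matrix back, distributed over the processes */
    PetscCall(MatCreate(PETSC_COMM_WORLD, &B));
    PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat-par-aux.dat", FILE_MODE_READ, &viewer));
    PetscCall(MatLoad(B, viewer));
    PetscCall(PetscViewerDestroy(&viewer));

If the application assembles A through the usual MatAssemblyBegin/MatAssemblyEnd path, the same binary dump can usually be obtained without code changes via the runtime option -mat_view binary:mat-par-aux.dat.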
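As for the stack trace Mark asks for, assuming PETSc was configured with --with-debugging=1, something along these lines should attach a debugger to each rank of the ex54 reproducer:

    mpirun -np 2 ex54 -ne 1000 -start_in_debugger

    # or attach a debugger only when an error is detected
    mpirun -np 2 ex54 -ne 1000 -on_error_attach_debugger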