Did you run in parallel with MPI? If not, running with MPI across multiple compute nodes could solve the problem: MatLoad distributes the rows across the ranks, so no single process has to hold an entire matrix.

Are all these matrices already on disk? Then you have to pay the I/O cost of reading them in any case.
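For the memory side, a minimal petsc4py sketch of the streaming average (the file names and count are placeholders; run it under mpiexec so each rank only holds its slice of the rows):

    # Run with e.g.: mpiexec -n 8 python average.py
    from petsc4py import PETSc

    nmats = 10                                       # hypothetical count
    files = [f"mat_{i}.dat" for i in range(nmats)]   # hypothetical names

    # Load the first matrix; Mat.load distributes rows over the
    # ranks of the viewer's communicator (COMM_WORLD by default).
    viewer = PETSc.Viewer().createBinary(files[0], "r")
    avg = PETSc.Mat().load(viewer)
    viewer.destroy()

    # Accumulate the rest one at a time, so at most two matrices
    # (per-rank slices of them) are in memory at once.
    for fname in files[1:]:
        viewer = PETSc.Viewer().createBinary(fname, "r")
        B = PETSc.Mat().load(viewer)
        viewer.destroy()
        # Identical nonzero pattern lets PETSc skip the pattern merge.
        avg.axpy(1.0, B, structure=PETSc.Mat.Structure.SAME_NONZERO_PATTERN)
        B.destroy()

    avg.scale(1.0 / nmats)

    # Write the averaged matrix back out in PETSc binary format.
    out = PETSc.Viewer().createBinary("mat_avg.dat", "w")
    avg.view(out)
    out.destroy()

Since all your matrices have the same nonzero pattern, the SAME_NONZERO_PATTERN flag avoids rebuilding the sparsity structure on every addition.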
--Junchao Zhang

On Thu, May 22, 2025 at 8:21 AM Donald Duck <superdduck88@gmail.com> wrote:

> Hello everyone
>
> A piece of C++ code writes PETSc AIJ matrices to binary files. My task is
> to compute an average of these AIJ matrices, so I read the first matrix
> with petsc4py and then add the other matrices to it. The matrices all have
> the same size, shape, nnz, etc.
>
> However, in some cases these matrices are so large that only one of them
> fits into memory at a time, and the reading/writing takes a significant
> amount of time. Is there a way to read a matrix row by row to prevent
> running out of memory?
>
> I'm also open to other suggestions. Thanks in advance!
>
> Raphael