[petsc-users] PETSc read binary matrix row by row

Stefano Zampini stefano.zampini at gmail.com
Thu May 22 11:54:33 CDT 2025


How big are these matrices?

The memory PETSc allocates is (8+4)*num_nnz + 4*num_rows bytes (in double
precision)
https://gitlab.com/petsc/petsc/-/blob/main/src/mat/impls/aij/seq/aij.h#L47
Assuming num_nnz >> num_rows, roughly 2.8 billion nonzeros fit in 32 GiB of RAM.
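As a quick sanity check on that estimate (a sketch; it assumes double-precision values with 32-bit row/column indices, per the AIJ header linked above):

```python
# Back-of-the-envelope memory estimate for a sequential AIJ matrix:
#   (8 + 4) bytes per nonzero (double value + 32-bit column index)
#   + 4 bytes per row (row-offset array)
def aij_bytes(num_nnz, num_rows):
    return (8 + 4) * num_nnz + 4 * num_rows

# With num_nnz >> num_rows, 2.8 billion nonzeros need about 31.3 GiB:
gib = aij_bytes(2_800_000_000, 0) / 2**30
print(round(gib, 1))  # 31.3
```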

Anyway, if you use the same job for matrix write + load + averaging, then
you are better off allocating a single matrix and performing the averaging
in the C++ code directly:

allocate_and_zero a single matrix
for i in range(number_of_matrices):
    add_values_to_the_matrix
scale_matrix_for_average
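Concretely, this is a running sum followed by a single scale at the end, so only one accumulator matrix is ever held in memory. A minimal pure-Python sketch of the pattern (matrices flattened to value lists for illustration; in PETSc the two commented steps would be MatSetValues with ADD_VALUES and MatScale):

```python
def average_in_place(value_streams, n):
    """Accumulate n same-shaped value arrays into one buffer, then scale.

    Stands in for: allocate one Mat, add each sample's values into it
    (MatSetValues with ADD_VALUES), then scale by 1/n (MatScale).
    """
    acc = None
    for values in value_streams:
        if acc is None:
            acc = list(values)            # allocate the single matrix once
        else:
            for j, v in enumerate(values):
                acc[j] += v               # add_values_to_the_matrix
    return [v / n for v in acc]           # scale_matrix_for_average

# Three "matrices", all with the identical nonzero pattern:
samples = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(average_in_place(iter(samples), len(samples)))  # [3.0, 4.0]
```

Because the matrices are produced in the same job, the sum can be accumulated as each one is assembled, and the write-to-disk/read-back round trip disappears entirely.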



On Thu, May 22, 2025 at 6:55 PM superdduck88 at gmail.com <
superdduck88 at gmail.com> wrote:

> Hi
>
> The averaging takes place on the same hardware, and as part of the same job,
> as writing the matrix to file in the C++ code. The number of nodes is
> determined by the problem size, and is therefore sized for a single matrix.
> Allocating extra nodes for the averaging is, unfortunately, not economical.
>
> On 22 May 2025, at 16:52, Junchao Zhang <junchao.zhang at gmail.com> wrote:
>
> 
> Did you run in MPI parallel?   If not, using MPI and running with multiple
> compute nodes could solve the problem.
>
> Are all these matrices already on disk?  Then you have to pay the I/O cost
> for reading the matrices.
>
> --Junchao Zhang
>
>
> On Thu, May 22, 2025 at 8:21 AM Donald Duck <superdduck88 at gmail.com>
> wrote:
>
>> Hello everyone
>>
>> A piece of C++ code writes PETSc AIJ matrices to binary files. My task is
>> to compute an average of these AIJ matrices. Therefore I read the
>> first matrix with petsc4py and then start to add the other matrices to it.
>> All matrices have the same size, shape, nnz, etc.
>>
>> However, in some cases these matrices are large and only one of them fits
>> into memory, and the reading/writing takes a significant amount of time. Is
>> there a way to read them row by row to prevent memory overflow?
>>
>> I'm also open for other suggestions, thanks in advance!
>>
>> Raphael
>>
>

-- 
Stefano

