[petsc-users] Memory Usage in Matrix Assembly.

Dave May dave.mayhem23 at gmail.com
Tue Mar 14 11:00:39 CDT 2023

On Tue, 14 Mar 2023 at 07:59, Pantelis Moschopoulos <
pmoschopoulos at outlook.com> wrote:

> Dear Dave,
> Yes, I observe this in parallel runs. How can I change the parallel layout
> of the matrix? In my implementation, I read the mesh file, and then I split
> the domain so that the first rank gets the first N elements, the second rank
> gets the next N elements, etc. Should I use METIS to distribute the elements?

> Note that I use continuous finite elements, which means that some values
> will be cached in a temporary buffer.

Sure. With CG FE you will always have some DOFs which need to be cached;
however, the number of cached values will be minimized if you follow Barry's
advice. If you do what Barry suggests, only the DOFs which live on the
boundary of your element-wise defined sub-domains would need to be cached.
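
To make the point concrete, here is a small illustrative sketch (plain Python, not PETSc code; the function names are made up for illustration). It counts, for a 1D P1 CG mesh split into contiguous element blocks, how many element contributions would land on rows owned by another rank — once with a row ownership that matches the element partition, and once with a deliberately mismatched (round-robin) ownership:

```python
# Hypothetical sketch: count how many MatSetValues-style contributions
# would target off-rank rows for a 1D P1 CG mesh. Off-rank contributions
# are exactly the values that PETSc must cache until assembly.

def contiguous_element_partition(nelem, nranks):
    """Assign elements 0..nelem-1 to ranks in contiguous blocks."""
    per, rem = divmod(nelem, nranks)
    owner = []
    for r in range(nranks):
        owner += [r] * (per + (1 if r < rem else 0))
    return owner

def off_rank_contributions(node_owner, elem_owner, nelem):
    """Element e contributes to the rows of its two nodes (e, e+1).
    A contribution is off-rank (and would be cached) when the
    assembling rank does not own that row."""
    off = 0
    for e in range(nelem):
        for node in (e, e + 1):
            if node_owner[node] != elem_owner[e]:
                off += 1
    return off

nelem, nranks = 1000, 4
eown = contiguous_element_partition(nelem, nranks)

# Matched layout: each node is owned by the rank of an adjacent element,
# so only nodes on sub-domain interfaces generate off-rank contributions.
matched = [eown[min(i, nelem - 1)] for i in range(nelem + 1)]
# Mismatched layout: rows dealt out round-robin, ignoring the mesh.
round_robin = [i % nranks for i in range(nelem + 1)]

print(off_rank_contributions(matched, eown, nelem))      # only the interfaces
print(off_rank_contributions(round_robin, eown, nelem))  # almost everything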


> Thank you very much,
> Pantelis
> ------------------------------
> *From:* Dave May <dave.mayhem23 at gmail.com>
> *Sent:* Tuesday, March 14, 2023 4:40 PM
> *To:* Pantelis Moschopoulos <pmoschopoulos at outlook.com>
> *Cc:* petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> *Subject:* Re: [petsc-users] Memory Usage in Matrix Assembly.
> On Tue 14. Mar 2023 at 07:15, Pantelis Moschopoulos <
> pmoschopoulos at outlook.com> wrote:
> Hi everyone,
> I am a new PETSc user who incorporates PETSc for FEM in a Fortran code.
> My question concerns a sudden increase in the memory that PETSc needs
> during the assembly of the Jacobian matrix. After this point, the memory is
> freed. It seems to me that PETSc performs memory allocations and
> deallocations during assembly.
> I have used the following commands with no success:
> The structure of the matrix does not change during my simulation, just the
> values. I would expect this behavior the first time that I create this
> matrix, because the preallocation instructions that I use are not very
> accurate, but it happens every time I assemble the matrix.
> What am I missing here?
> I am guessing this observation is seen when you run a parallel job.
> MatSetValues() will cache values in a temporary memory buffer if the
> values are to be sent to a different MPI rank.
> Hence if the parallel layout of your matrix doesn’t closely match the
> layout of the DOFs on each mesh sub-domain, then a huge number of values
> can potentially be cached. After you call MatAssemblyBegin() and
> MatAssemblyEnd(), this cache will be freed.
> Thanks,
> Dave
> Thank you very much,
> Pantelis
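
The caching behaviour described above can be modelled with a toy class (this is not the PETSc implementation, just an illustrative sketch; the class and method names are invented). On-rank values are stored directly, off-rank values accumulate in a stash, and the stash is only freed at assembly time — which mirrors the observed memory spike and drop:

```python
# Toy model (not PETSc code) of the value stash behind MatSetValues:
# off-rank insertions accumulate in a buffer that grows until assembly,
# then the MatAssemblyBegin/End pair ships the values and frees it.

class ToyStashMatrix:
    def __init__(self, rank, row_owner):
        self.rank = rank
        self.row_owner = row_owner   # row index -> owning rank
        self.local = {}              # entries in rows this rank owns
        self.stash = []              # cached (row, col, value) triples

    def set_value(self, row, col, value):
        if self.row_owner[row] == self.rank:
            key = (row, col)
            self.local[key] = self.local.get(key, 0.0) + value
        else:
            # Mirrors MatSetValues caching values destined for another rank.
            self.stash.append((row, col, value))

    def assembly_end(self, exchange):
        # 'exchange' stands in for the MPI communication that delivers the
        # stash to the owning ranks; afterwards the buffer is released,
        # matching the memory drop seen after MatAssemblyEnd().
        exchange(self.stash)
        self.stash = []

row_owner = {0: 0, 1: 0, 2: 1, 3: 1}
m = ToyStashMatrix(rank=0, row_owner=row_owner)
m.set_value(0, 0, 1.0)   # on-rank: stored directly
m.set_value(2, 1, 5.0)   # off-rank: cached in the stash
m.set_value(3, 3, 2.0)   # off-rank: cached in the stash
print(len(m.stash))      # 2
m.assembly_end(exchange=lambda stash: None)
print(len(m.stash))      # 0 -- the cache is freed
```

The more values fall into the stash, the larger the temporary spike; matching the matrix row layout to the element partition keeps the stash small.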

More information about the petsc-users mailing list