[petsc-users] Memory Usage in Matrix Assembly.

Pantelis Moschopoulos pmoschopoulos at outlook.com
Wed Mar 15 01:34:58 CDT 2023


Dear all,
Thank you all very much for your suggestions.

Dave, I also use the reverse Cuthill–McKee algorithm when I load the mesh information, and then the simulation proceeds. I can use partitioning after the reordering, right?

Matt, by PLEX do you mean DMPLEX? To be honest, I have not tried the DM structures of PETSc up to this point.

Pantelis
________________________________
From: Matthew Knepley <knepley at gmail.com>
Sent: Tuesday, March 14, 2023 10:55 PM
To: Dave May <dave.mayhem23 at gmail.com>
Cc: Pantelis Moschopoulos <pmoschopoulos at outlook.com>; petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
Subject: Re: [petsc-users] Memory Usage in Matrix Assembly.

On Tue, Mar 14, 2023 at 12:01 PM Dave May <dave.mayhem23 at gmail.com> wrote:


On Tue, 14 Mar 2023 at 07:59, Pantelis Moschopoulos <pmoschopoulos at outlook.com> wrote:
Dear Dave,

Yes, I observe this in parallel runs. How can I change the parallel layout of the matrix? In my implementation, I read the mesh file and then split the domain so that the first rank gets the first N elements, the second rank gets the next N elements, etc. Should I use METIS to distribute the elements?

Note that I use continuous finite elements, which means that some values will be cached in a temporary buffer.

Sure. With CG FE you will always have some DOFs which need to be cached; however, the number of cached values will be minimized if you follow Barry's advice. If you do what Barry suggests, only the DOFs which live on the boundary of your element-wise defined sub-domains would need to be cached.
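For illustration, one way to obtain such an element distribution is PETSc's MatPartitioning interface, which can drive METIS/ParMETIS. The sketch below is only a rough outline: it assumes the element-to-element (dual) connectivity has already been stored in a MATMPIADJ matrix named adj, and that name and the surrounding variables are purely illustrative, not part of any existing code.

#include <petsc/finclude/petscmat.h>
      use petscmat
      Mat             adj       ! element dual graph (MATMPIADJ), assumed built elsewhere
      MatPartitioning part
      IS              ispart    ! for each local element, the rank it should be assigned to
      PetscErrorCode  ier

      CALL MatPartitioningCreate(PETSC_COMM_WORLD, part, ier)
      CALL MatPartitioningSetAdjacency(part, adj, ier)
      CALL MatPartitioningSetFromOptions(part, ier)   ! e.g. -mat_partitioning_type parmetis
      CALL MatPartitioningApply(part, ispart, ier)
      CALL MatPartitioningDestroy(part, ier)
      ! ... migrate the elements according to ispart, then create the matrix on the new layout

Once the elements have been migrated, the matrix layout can be chosen to follow the locally owned DOFs, which is what keeps the assembly cache small.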

Note that we have direct support for unstructured meshes (Plex) with partitioning and redistribution, rather than translating them to purely algebraic language.
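For what it is worth, the Plex redistribution step looks roughly like the sketch below. It assumes dm already holds the serial mesh (e.g. read with DMPlexCreateFromFile), and the exact include/module names can vary slightly between PETSc versions.

#include <petsc/finclude/petscdmplex.h>
      use petscdmplex
      DM             dm, dmdist
      PetscErrorCode ier

      ! dm is assumed to already contain the serial mesh
      CALL DMPlexDistribute(dm, 0, PETSC_NULL_SF, dmdist, ier)   ! 0 = no cell overlap
      IF (dmdist .ne. PETSC_NULL_DM) THEN                        ! dmdist stays null on a single rank
         CALL DMDestroy(dm, ier)
         dm = dmdist
      END IF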

  Thanks,

     Matt

Thanks,
Dave


Thank you very much,
Pantelis
________________________________
From: Dave May <dave.mayhem23 at gmail.com>
Sent: Tuesday, March 14, 2023 4:40 PM
To: Pantelis Moschopoulos <pmoschopoulos at outlook.com>
Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
Subject: Re: [petsc-users] Memory Usage in Matrix Assembly.



On Tue 14. Mar 2023 at 07:15, Pantelis Moschopoulos <pmoschopoulos at outlook.com> wrote:
Hi everyone,

I am a new PETSc user who is incorporating PETSc for FEM in a Fortran code.
My question concerns the sudden increase in memory that PETSc needs during assembly of the Jacobian matrix. After this point, the memory is freed. It seems to me that PETSc performs memory allocations and deallocations during assembly.
I have used the following commands with no success:
CALL MatSetOption(petsc_A, MAT_NEW_NONZERO_LOCATIONS, PETSC_FALSE, ier)
CALL MatSetOption(petsc_A, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_TRUE, ier)
CALL MatSetOption(petsc_A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE, ier)
CALL MatSetOption(petsc_A, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE, ier)

The structure of the matrix does not change during my simulation, only the values. I would expect this behavior the first time I create the matrix, because the preallocation instructions I use are not very accurate, but it continues every time I assemble the matrix.
What am I missing here?
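For reference, the extra allocations on the owning rank can be avoided with exact per-row preallocation. The sketch below uses illustrative names only: nloc stands for the number of locally owned rows, and d_nnz/o_nnz would be filled from the mesh connectivity.

#include <petsc/finclude/petscmat.h>
      use petscmat
      Mat                      petsc_A
      PetscInt                 nloc
      PetscInt, allocatable :: d_nnz(:), o_nnz(:)
      PetscErrorCode           ier

      ! ... compute nloc, then allocate and fill d_nnz(1:nloc), o_nnz(1:nloc):
      !     d_nnz(i) = nonzeros of local row i in columns owned by this rank
      !     o_nnz(i) = nonzeros of local row i in columns owned by other ranks
      CALL MatCreate(PETSC_COMM_WORLD, petsc_A, ier)
      CALL MatSetSizes(petsc_A, nloc, nloc, PETSC_DETERMINE, PETSC_DETERMINE, ier)
      CALL MatSetType(petsc_A, MATMPIAIJ, ier)
      CALL MatMPIAIJSetPreallocation(petsc_A, 0, d_nnz, 0, o_nnz, ier)   ! the scalar arguments are ignored when the arrays are given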

I am guessing you observe this when you run a parallel job.

MatSetValues() will cache values in a temporary memory buffer if the values are to be sent to a different MPI rank.
Hence, if the parallel layout of your matrix doesn't closely match the layout of the DOFs on each mesh sub-domain, a huge number of values can potentially be cached. After you call MatAssemblyBegin()/MatAssemblyEnd(), this cache will be freed.
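A quick way to quantify the mismatch is to compare each assembled row against the rank's ownership range. The sketch below uses hypothetical connectivity names (nelem_local, ndof_per_elem, global_row) purely for illustration; only the PETSc calls are real.

#include <petsc/finclude/petscmat.h>
      use petscmat
      Mat                      petsc_A
      PetscInt                 rstart, rend, row, ncached, iel, k
      PetscInt                 nelem_local, ndof_per_elem      ! hypothetical mesh bookkeeping
      PetscInt, allocatable :: global_row(:,:)                 ! hypothetical connectivity (0-based global rows)
      PetscErrorCode           ier

      CALL MatGetOwnershipRange(petsc_A, rstart, rend, ier)    ! this rank owns global rows rstart..rend-1
      ncached = 0
      DO iel = 1, nelem_local
         DO k = 1, ndof_per_elem
            row = global_row(k, iel)
            IF (row < rstart .OR. row >= rend) ncached = ncached + 1
         END DO
      END DO
      ! A large ncached means most contributions target rows owned by other ranks
      ! and will therefore sit in the cache until MatAssemblyBegin/MatAssemblyEnd.

Running with -info also reports the size of this cache (the "stash") during assembly.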

Thanks,
Dave



Thank you very much,
Pantelis


--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/