[petsc-users] Memory Usage in Matrix Assembly.

Pantelis Moschopoulos pmoschopoulos at outlook.com
Tue Mar 14 10:32:35 CDT 2023


Ok, I will try to implement your suggestions.

Thank you very much for your help,
Pantelis
________________________________
From: Barry Smith <bsmith at petsc.dev>
Sent: Tuesday, March 14, 2023 5:21 PM
To: Pantelis Moschopoulos <pmoschopoulos at outlook.com>
Cc: Dave May <dave.mayhem23 at gmail.com>; petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
Subject: Re: [petsc-users] Memory Usage in Matrix Assembly.


  Yes, you should partition the elements and redistribute them for optimal parallelism.

  You can use the MatPartitioning object to partition the graph of the elements; it will tell you which elements should be assigned to each MPI process. You then need to move the element information to the correct process yourself. After that, your code can remain pretty much as it is now.
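  In rough outline (an untested sketch, not a drop-in: ia/ja are assumed to hold the element adjacency graph in CSR form with 0-based indices, and nelem_local, nelem_global, is_newproc are illustrative names), the calls look something like:

Mat             adj
MatPartitioning part
IS              is_newproc
PetscErrorCode  ier

! Element connectivity graph: nelem_local elements on this rank, nelem_global in total.
! PETSC_NULL_INTEGER means no edge weights are supplied.
CALL MatCreateMPIAdj(PETSC_COMM_WORLD, nelem_local, nelem_global, ia, ja, PETSC_NULL_INTEGER, adj, ier)

CALL MatPartitioningCreate(PETSC_COMM_WORLD, part, ier)
CALL MatPartitioningSetAdjacency(part, adj, ier)
CALL MatPartitioningSetFromOptions(part, ier)
! is_newproc(i) is the MPI rank that local element i should be moved to
CALL MatPartitioningApply(part, is_newproc, ier)

! ... redistribute the element data according to is_newproc ...

CALL ISDestroy(is_newproc, ier)
CALL MatPartitioningDestroy(part, ier)
CALL MatDestroy(adj, ier)

  With MatPartitioningSetFromOptions() you can pick the partitioner at run time, e.g. -mat_partitioning_type parmetis if PETSc was configured with ParMETIS.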

  Barry


On Mar 14, 2023, at 10:59 AM, Pantelis Moschopoulos <pmoschopoulos at outlook.com> wrote:

Dear Dave,

Yes, I observe this in parallel runs. How can I change the parallel layout of the matrix? In my implementation, I read the mesh file and then split the domain so that the first rank gets the first N elements, the second rank gets the next N elements, and so on. Should I use METIS to distribute the elements? Note that I use continuous finite elements, which means that some values will be cached in a temporary buffer.

Thank you very much,
Pantelis
________________________________
From: Dave May <dave.mayhem23 at gmail.com>
Sent: Tuesday, March 14, 2023 4:40 PM
To: Pantelis Moschopoulos <pmoschopoulos at outlook.com>
Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
Subject: Re: [petsc-users] Memory Usage in Matrix Assembly.



On Tue 14. Mar 2023 at 07:15, Pantelis Moschopoulos <pmoschopoulos at outlook.com> wrote:
Hi everyone,

I am a new PETSc user, incorporating PETSc for FEM in a Fortran code.
My question concerns the sudden increase in memory that PETSc needs during assembly of the Jacobian matrix. After this point, the memory is freed. It seems to me that PETSc performs memory allocations and then deallocations during assembly.
I have used the following commands with no success:
CALL MatSetOption(petsc_A, MAT_NEW_NONZERO_LOCATIONS, PETSC_FALSE, ier)
CALL MatSetOption(petsc_A, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_TRUE, ier)
CALL MatSetOption(petsc_A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE, ier)
CALL MatSetOption(petsc_A, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE, ier)

The structure of the matrix does not change during my simulation, only the values. I would expect this behavior the first time I assemble the matrix, because the preallocation I provide is not very accurate, but it happens every time I assemble the matrix.
What am I missing here?
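For reference, exact preallocation for an MPIAIJ matrix is usually done per row; a minimal sketch, assuming per-row counts d_nnz/o_nnz have already been computed from the mesh connectivity (these names are illustrative, not taken from the code above):

PetscInt, allocatable :: d_nnz(:), o_nnz(:)   ! one entry per locally owned row

! d_nnz(i): nonzeros of local row i in the diagonal block (columns owned by this rank)
! o_nnz(i): nonzeros of local row i in the off-diagonal block (columns owned by other ranks)
! The scalar arguments are ignored when the per-row arrays are supplied.
CALL MatMPIAIJSetPreallocation(petsc_A, PETSC_DEFAULT_INTEGER, d_nnz, PETSC_DEFAULT_INTEGER, o_nnz, ier)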

I am guessing this observation is seen when you run a parallel job.

MatSetValues() will cache values in a temporary memory buffer if the values are to be sent to a different MPI rank.
Hence, if the parallel layout of your matrix doesn't closely match the layout of the DOFs on each mesh sub-domain, a huge number of values can potentially be cached. After you call MatAssemblyBegin()/MatAssemblyEnd(), this cache will be freed.
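A quick way to see how much of the assembly is off-process is to compare the rows you write against the rows your rank owns; a minimal sketch, reusing petsc_A from the snippets above (rstart/rend are illustrative names):

PetscInt       rstart, rend
PetscErrorCode ier

! Rows rstart .. rend-1 are stored on this rank; MatSetValues() calls that touch
! rows outside this range go into the stash and are only sent during MatAssemblyBegin/End.
CALL MatGetOwnershipRange(petsc_A, rstart, rend, ier)

Running with the -info option also reports how many entries ended up in the stash during assembly, which should make any mismatch obvious.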

Thanks,
Dave



Thank you very much,
Pantelis
