[petsc-users] Memory usage scaling with number of processors

Matthew Knepley knepley at gmail.com
Wed Jul 24 20:26:26 CDT 2024


On Wed, Jul 24, 2024 at 8:37 PM Matthew Thomas <matthew.thomas1 at anu.edu.au>
wrote:

> Hello Matt,
>
> Thanks for the help. I believe the problem comes from an incorrect
> linking between MPI and PETSc.
>
> I tried running with petscmpiexec from
> $PETSC_DIR/lib/petsc/bin/petscmpiexec. This gave me the error
>
> Error build location not found! Please set PETSC_DIR and PETSC_ARCH
> correctly for this build.
>
>
> Naturally, I have set these two values, and echo $PETSC_DIR gives the path
> I expect. It seems I am running my programs with a different version of MPI
> than PETSc expects, which could explain the memory usage.
>
> Do you have any ideas how to fix this?
>

Yes. First, we determine which MPI you configured with. Send configure.log,
which has this information.

  Thanks,

      Matt


> Thanks,
> Matt
>
> On 24 Jul 2024, at 8:41 PM, Matthew Knepley <knepley at gmail.com> wrote:
>
> On Tue, Jul 23, 2024 at 8:02 PM Matthew Thomas <matthew.thomas1 at anu.edu.au>
> wrote:
>
>> Hello Matt,
>>
>> I have attached the output with mat_view for 8 and 40 processors.
>>
>> I am unsure what is meant by the matrix communicator and the
>> partitioning. I am using the default behaviour in every case. How can I
>> find this information?
>>
>
> This shows that the matrix is taking the same amount of memory for 8 and
> 40 procs, so that is not your problem. Also,
> it is a very small amount of memory:
>
>   100K rows x 3 nz/row x 8 bytes/nz = 2.4 MB
>
> plus roughly 50% overhead for indexing, so something under 4 MB. I am not
> sure what is taking up the rest of the memory, but from the log you
> included, I do not think it is PETSc.
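[A quick back-of-envelope script, reproducing the arithmetic above under the same assumptions (8-byte double-precision values, ~50% indexing overhead):]

```python
# Sanity-check of the matrix memory estimate discussed above.
n_rows = 100_000        # matrix rows (n = 100000 in the report)
nnz_per_row = 3         # nonzeros per row
bytes_per_value = 8     # double-precision scalar

values_mb = n_rows * nnz_per_row * bytes_per_value / 1e6
total_mb = values_mb * 1.5   # ~50% extra for column indices and row offsets

print(values_mb)  # 2.4
print(total_mb)   # 3.6
```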
>
>   Thanks,
>
>      Matt
>
>
>> I have attached the log view as well if that helps.
>>
>> Thanks,
>> Matt
>>
>>
>>
>>
>> On 23 Jul 2024, at 9:24 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>
>> Also, you could run with
>>
>>   -mat_view ::ascii_info_detail
>>
>> and send the output for both cases. The storage of matrix values is not
>> redundant, so something else is
>> going on. First, what communicator do you use for the matrix, and what
>> partitioning?
>>
>>   Thanks,
>>
>>      Matt
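[For reference on the partitioning question above, here is a small sketch of the row distribution PETSc applies by default when local sizes are left as PETSC_DECIDE. The even-split-with-remainder rule is stated as an assumption about PETSc's behavior, not taken from this thread:]

```python
# Hedged sketch: with PETSC_DECIDE, each of the p ranks owns n // p rows,
# and the first n % p ranks get one extra row.
def local_rows(n: int, p: int, rank: int) -> int:
    base, extra = divmod(n, p)
    return base + (1 if rank < extra else 0)

# n = 100000 splits evenly over both 8 and 40 ranks:
print(local_rows(100_000, 8, 0))   # 12500 rows per rank
print(local_rows(100_000, 40, 0))  # 2500 rows per rank
```

Since 100000 divides evenly in both cases, every rank holds the same number of rows, consistent with the matrix memory not changing between the two runs.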
>>
>> On Mon, Jul 22, 2024 at 10:27 PM Barry Smith <bsmith at petsc.dev> wrote:
>>
>>
>>
>>   Send the code.
>>
>> On Jul 22, 2024, at 9:18 PM, Matthew Thomas via petsc-users <
>> petsc-users at mcs.anl.gov> wrote:
>>
>>
>> Hello,
>>
>> I am using PETSc and SLEPc to solve an eigenvalue problem for sparse
>> matrices. When I run my code with double the number of processors, the
>> memory usage also doubles.
>>
>> I can reproduce this behaviour with ex1 of SLEPc's hands-on exercises.
>>
>> The issue lies in PETSc rather than SLEPc: it still occurs when I remove
>> the solve step and only create and assemble the PETSc matrix.
>>
>> With n=100000, this uses ~1 GB with 8 processors but ~5 GB with 40
>> processors.
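[The figures reported here happen to be consistent with a roughly constant per-rank footprint rather than growth in any one object; a quick check, where the ~125 MB-per-rank figure is an inference from the numbers above, not a diagnosis:]

```python
# If each MPI rank carries a roughly constant overhead, total memory
# scales linearly with the rank count, matching the observation.
per_rank_gb = 1.0 / 8       # ~0.125 GB per rank at 8 ranks
print(per_rank_gb * 40)     # 5.0 -> matches the ~5 GB seen at 40 ranks
```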
>>
>> This was done with PETSc 3.21.3 on Linux, compiled with the Intel
>> compilers and Intel MPI.
>>
>> Is this the expected behaviour? If not, how can I debug it?
>>
>>
>> Thanks,
>> Matt
>>
>>
>>
>>
>>
>>
>>
>
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/