[petsc-users] Memory usage scaling with number of processors

Matthew Thomas matthew.thomas1 at anu.edu.au
Mon Jul 22 22:32:51 CDT 2024


Hi Barry,

The minimal example is shown below.



static char help[] = "Assembles a tridiagonal matrix in parallel.\n";

#include <slepceps.h>

int main(int argc, char **argv)
{
  Mat      A;                  /* problem matrix */
  PetscInt n = 100000, i, Istart, Iend;

  PetscFunctionBeginUser;
  PetscCall(SlepcInitialize(&argc, &argv, (char *)0, help));
  PetscCall(PetscOptionsGetInt(NULL, NULL, "-n", &n, NULL));

  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));

  PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
  for (i = Istart; i < Iend; i++) {
    if (i > 0) PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatDestroy(&A));
  PetscCall(SlepcFinalize());
  return 0;
}
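[Editor's note: a hedged variant, not part of the original message. Without preallocation, MatSetValue can cause repeated reallocations and stash traffic during assembly, which is one standard source of unexpected memory use. For this tridiagonal matrix each row has at most 3 nonzeros, so the example above can be preallocated exactly; this sketch uses plain PETSc (PetscInitialize rather than SlepcInitialize) and is an illustration, not a claimed fix for the reported scaling.]

```c
/* Sketch: the assembly loop from the example above, with explicit
   preallocation added.  Whether preallocation explains the reported
   memory growth is an assumption to be tested. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat      A;
  PetscInt n = 100000, i, Istart, Iend;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscOptionsGetInt(NULL, NULL, "-n", &n, NULL));

  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  /* At most 3 nonzeros per row; 2 is an upper bound on entries in
     columns owned by other ranks (only boundary rows have any). */
  PetscCall(MatSeqAIJSetPreallocation(A, 3, NULL));
  PetscCall(MatMPIAIJSetPreallocation(A, 3, NULL, 2, NULL));

  PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
  for (i = Istart; i < Iend; i++) {
    if (i > 0) PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}
```

Since each process touches only rows it owns, almost all insertions here are local; the off-diagonal preallocation only matters for the two couplings at each ownership boundary.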

Thanks,
Matt

On 23 Jul 2024, at 12:27 PM, Barry Smith <bsmith at petsc.dev> wrote:


  Send the code.

On Jul 22, 2024, at 9:18 PM, Matthew Thomas via petsc-users <petsc-users at mcs.anl.gov> wrote:


Hello,

I am using PETSc and SLEPc to solve an eigenvalue problem for sparse matrices. When I run my code with double the number of processors, the memory usage also doubles.

I am able to reproduce this behaviour with ex1 of SLEPc's hands-on exercises.

The issue is occurring with PETSc, not with SLEPc, as it still occurs when I remove the solve step and just create and assemble the PETSc matrix.

With n=100000, this uses ~1 GB with 8 processors, but ~5 GB with 40 processors.

This was done with PETSc 3.21.3 on Linux, compiled with the Intel compiler and Intel MPI.

Is this the expected behaviour? If not, how can I debug this?
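[Editor's note: a hedged suggestion, not from the original thread. PETSc has built-in memory reporting options that can show whether the growth comes from PETSc's own allocations or from elsewhere (for example, per-rank buffers inside the MPI library). The binary name ex1 and the launcher mpiexec below are assumptions; substitute your own.]

```shell
# -memory_view prints each process's memory high-water marks at PetscFinalize;
# -malloc_view reports memory obtained through PETSc's malloc wrappers.
mpiexec -n 8  ./ex1 -n 100000 -memory_view
mpiexec -n 40 ./ex1 -n 100000 -memory_view -malloc_view
```

If PETSc's own usage stays roughly flat per rank while the OS-reported total grows with the process count, the extra memory is coming from outside PETSc.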


Thanks,
Matt



