[petsc-users] [petsc-maint] Memory usage function: output for all ranks

Richard Mills richardtmills at gmail.com
Mon Nov 30 19:03:50 CST 2015


Andrey,

Maybe this is what you tried, but did you try running only a handful of MPI
ranks (out of your 1000) with Massif?  I've had success doing things that
way.  You won't know what every rank is doing, but you may be able to get a
good idea from your sample.
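
For example, a small wrapper script along these lines (just a sketch, not
something I've tested on your system; the rank environment variable depends on
your MPI launcher, shown here for Slurm and Open MPI, and massif_wrapper.sh /
your_app are placeholder names):

  #!/bin/bash
  # massif_wrapper.sh: put only the first few ranks under Massif,
  # run everything else natively.
  RANK=${SLURM_PROCID:-${OMPI_COMM_WORLD_RANK:-0}}
  if [ "$RANK" -lt 4 ]; then
    exec valgrind --tool=massif --massif-out-file=massif.out.rank$RANK "$@"
  else
    exec "$@"
  fi

and then launch the full job through the wrapper, e.g.
"mpirun -n 1000 ./massif_wrapper.sh ./your_app <args>" (or srun/aprun,
whatever your system uses), so only ranks 0-3 pay the Massif overhead.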

--Richard

On Mon, Nov 30, 2015 at 3:42 PM, Andrey Ovsyannikov <aovsyannikov at lbl.gov>
wrote:

> Hi Matt,
>
> Thanks for your quick response. I like the Massif tool and I have been using
> it recently. However, I was not able to run Valgrind for large jobs, and I am
> interested in memory analysis of large-scale runs with more than 1000 MPI
> ranks. PetscMemoryGetCurrentUsage() works fine for this purpose, but it does
> not provide details on where I allocate memory. Maybe it would be beneficial
> for the PETSc community to have some tool/function from PETSc itself.
>
> Anyway, thanks very much for your suggestion!
>
> Andrey
>
> On Mon, Nov 30, 2015 at 3:31 PM, Matthew Knepley <knepley at gmail.com>
> wrote:
>
>> On Mon, Nov 30, 2015 at 5:20 PM, Andrey Ovsyannikov <aovsyannikov at lbl.gov
>> > wrote:
>>
>>> Dear PETSc team,
>>>
>>> I am working on optimization of Chombo-Crunch CFD code for
>>> next-generation supercomputer architectures at NERSC (Berkeley Lab) and we
>>> use PETSc AMG solver. During memory analysis study I faced with a
>>> difficulty to get memory usage data from PETSc for all MPI ranks. I am
>>> looking for memory dump function to get a detailed information on memory
>>> usage (not only resident size and virtual memory but allso allocation by
>>> Vec, Mat, etc). There is PetscMallocDumpLog() function but it is a
>>> collective function and it always provides a log for 0 rank. I am wondering
>>> if it is possible to include in PETSc a modification of
>>> PetscMallocDumpLog() which dumps the similar log but for all MPI ranks.
>>>
>>> I am attaching an example of my own memory function, which uses PETSc
>>> non-collective functions and reports the resident set size and virtual
>>> memory for each rank. Perhaps PetscMallocDumpLog() could be modified in a
>>> similar way.
>>>
>>
>> You could walk the heap if you use the debugging malloc infrastructure in
>> PETSc. However, I would really recommend trying out Massif from the Valgrind
>> toolset. It's designed for this and is really nice.
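>>
>> For instance, something along these lines (just an untested sketch, and the
>> helper name is only for illustration; it assumes PETSc's tracing malloc is
>> active, e.g. a debug build or running with -malloc) would let each rank
>> write its currently allocated blocks, and where they were allocated, to its
>> own file:
>>
>>   #include <petscsys.h>
>>
>>   /* Illustrative helper, not a PETSc API: per-rank dump of live
>>      PetscMalloc() allocations. */
>>   void petscMallocDumpPerRank(const char prefix[])
>>   {
>>     FILE        *fd;
>>     char        fname[PETSC_MAX_PATH_LEN];
>>     PetscMPIInt rank;
>>
>>     MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
>>     PetscSNPrintf(fname,sizeof(fname),"%s.%d",prefix,rank);
>>     PetscFOpen(PETSC_COMM_SELF,fname,"w",&fd);
>>     PetscMallocDump(fd);  /* walks PETSc's record of unfreed allocations */
>>     PetscFClose(PETSC_COMM_SELF,fd);
>>   }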
>>
>>   Thanks,
>>
>>     Matt
>>
>>
>>> Thank you,
>>>
>>> void petscMemoryLog(const char prefix[])
>>> {
>>>   FILE* fd;
>>>   char fname[PETSC_MAX_PATH_LEN];
>>>   PetscMPIInt rank;
>>>
>>>   MPI_Comm_rank(Chombo_MPI::comm,&rank);
>>>
>>>   PetscLogDouble allocated;
>>>   PetscLogDouble resident;
>>>   PetscMallocGetCurrentUsage(&allocated);
>>>   PetscMemoryGetCurrentUsage(&resident);
>>>   PetscSNPrintf(fname,sizeof(fname),"%s.%d",prefix,rank);
>>>   PetscFOpen(PETSC_COMM_SELF,fname,"a",&fd);
>>>
>>>   PetscFPrintf(PETSC_COMM_SELF,fd,"### PETSc memory footprint for rank %d \n",rank);
>>>   PetscFPrintf(PETSC_COMM_SELF,fd,"[%d] Memory allocated by PetscMalloc() %.0f bytes\n",rank,allocated);
>>>   PetscFPrintf(PETSC_COMM_SELF,fd,"[%d] RSS usage by entire process %.0f KB\n",rank,resident);
>>>   PetscFClose(PETSC_COMM_SELF,fd);
>>> }
>>>
>>> Best regards,
>>> Andrey Ovsyannikov, Ph.D.
>>> Postdoctoral Fellow
>>> NERSC Division
>>> Lawrence Berkeley National Laboratory
>>> 510-486-7880
>>> aovsyannikov at lbl.gov
>>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>
>
>
> --
> Andrey Ovsyannikov, Ph.D.
> Postdoctoral Fellow
> NERSC Division
> Lawrence Berkeley National Laboratory
> 510-486-7880
> aovsyannikov at lbl.gov
>

