<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Nov 30, 2015 at 5:20 PM, Andrey Ovsyannikov <span dir="ltr"><<a href="mailto:aovsyannikov@lbl.gov" target="_blank">aovsyannikov@lbl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Dear PETSc team,<br><br></div><div>I am working on optimization of Chombo-Crunch CFD code for next-generation supercomputer architectures at NERSC (Berkeley Lab) and we use PETSc AMG solver. During memory analysis study I faced with a difficulty to get memory usage data from PETSc for all MPI ranks. I am looking for memory dump function to get a detailed information on memory usage (not only resident size and virtual memory but allso allocation by Vec, Mat, etc). There is PetscMallocDumpLog() function but it is a collective function and it always provides a log for 0 rank. I am wondering if it is possible to include in PETSc a modification of PetscMallocDumpLog() which dumps the similar log but for all MPI ranks.<br><br></div><div>I am attaching an example of my own memory function which uses PETSc non-collective functions and it provides a resident set size and virtual memory for all ranks. Perhaps in a similar way it is possible to modify PetscMallocDumpLog.<br></div></div></div></blockquote><div><br></div><div>You could walk the heap if you use the debugging malloc infrastructure in PETSc. However, I would really recommend</div><div>trying out Massif from the valgrind toolset. Its designed for this and really nice.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div></div><div>Thank you,<br></div><div><br clear="all">void petscMemoryLog(const char prefix[])<br>{<br> FILE* fd;<br> char fname[PETSC_MAX_PATH_LEN];<br> PetscMPIInt rank;<br><br> MPI_Comm_rank(Chombo_MPI::comm,&rank);<br><br> PetscLogDouble allocated;<br> PetscLogDouble resident;<br> PetscMallocGetCurrentUsage(&allocated);<br> PetscMemoryGetCurrentUsage(&resident);<br> PetscSNPrintf(fname,sizeof(fname),"%s.%d",prefix,rank);<br> PetscFOpen(PETSC_COMM_SELF,fname,"a",&fd);<br><br> PetscFPrintf(PETSC_COMM_SELF,fd,"### PETSc memory footprint for rank %d \n",rank);<br> PetscFPrintf(PETSC_COMM_SELF,fd,"[%d] Memory allocated by PetscMalloc() %.0f bytes\n",rank,allocated);<br> PetscFPrintf(PETSC_COMM_SELF,fd,"[%d] RSS usage by entire process %.0f KB\n",rank,resident);<br> PetscFClose(PETSC_COMM_SELF,fd);<br>}<br></div><br></div>Best regards,<br><div><div><div><div dir="ltr"><div><div dir="ltr"><div>Andrey Ovsyannikov, Ph.D.</div>Postdoctoral Fellow<br><div>NERSC Division</div><div>Lawrence Berkeley National Laboratory</div><div><a href="tel:510-486-7880" value="+15104867880" target="_blank">510-486-7880</a></div><div><a href="mailto:aovsyannikov@lbl.gov" target="_blank">aovsyannikov@lbl.gov</a></div></div></div></div></div>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener