[petsc-users] Debugging a petsc code

Satish Balay balay at mcs.anl.gov
Mon Jan 30 22:03:10 CST 2017


This is a memory leak in OpenMPI - you can ignore it.

For a valgrind-clean MPI, you can build PETSc with --download-mpich.
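
For example, a minimal sketch (the executable name, process count, and
options below are illustrative; add your usual configure options):

  ./configure --download-mpich
  make
  mpiexec -n 2 valgrind --leak-check=full ./myapp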

Satish

On Mon, 30 Jan 2017, Praveen C wrote:

> -malloc_test does not report anything.
> 
> Freeing all PETSc vectors got rid of those errors.
> 
> Now I see only MPI-related errors like this:
> 
> ==33686== 376 (232 direct, 144 indirect) bytes in 1 blocks are definitely lost in loss record 148 of 159
> ==33686==    at 0x4C2B0AF: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==33686==    by 0x660D7EF: mca_bml_r2_add_procs (in /home/spack/opt/spack/linux-opensuse20161217-x86_64/gcc-6/openmpi-2.0.1-asdjmd22cnyktv2athcx3ouhrozknk22/lib64/libmpi.so.20.0.1)
> ==33686==    by 0x66D11CA: mca_pml_ob1_add_procs (in /home/spack/opt/spack/linux-opensuse20161217-x86_64/gcc-6/openmpi-2.0.1-asdjmd22cnyktv2athcx3ouhrozknk22/lib64/libmpi.so.20.0.1)
> ==33686==    by 0x65CE906: ompi_mpi_init (in /home/spack/opt/spack/linux-opensuse20161217-x86_64/gcc-6/openmpi-2.0.1-asdjmd22cnyktv2athcx3ouhrozknk22/lib64/libmpi.so.20.0.1)
> ==33686==    by 0x65ED082: PMPI_Init (in /home/spack/opt/spack/linux-opensuse20161217-x86_64/gcc-6/openmpi-2.0.1-asdjmd22cnyktv2athcx3ouhrozknk22/lib64/libmpi.so.20.0.1)
> ==33686==    by 0x6352D97: MPI_INIT (in /home/spack/opt/spack/linux-opensuse20161217-x86_64/gcc-6/openmpi-2.0.1-asdjmd22cnyktv2athcx3ouhrozknk22/lib64/libmpi_mpifh.so.20.0.0)
> ==33686==    by 0x4F393C6: petscinitialize_ (zstart.c:320)
> ==33686==    by 0x417718: MAIN__ (all.f95:1385)
> ==33686==    by 0x4184B2: main (all.f95:1366)
> 
> 
> Does this indicate some error in my code or in my MPI?
> 
> This is the valgrind summary:
> 
> ==33686== LEAK SUMMARY:
> ==33686==    definitely lost: 1,378 bytes in 14 blocks
> ==33686==    indirectly lost: 64,882 bytes in 88 blocks
> ==33686==      possibly lost: 0 bytes in 0 blocks
> ==33686==    still reachable: 32,984 bytes in 139 blocks
> ==33686==         suppressed: 0 bytes in 0 blocks
> 
> 
> I have attached the full valgrind output.
> 
> 
> Thanks
> praveen
> 
> On Tue, Jan 31, 2017 at 12:18 AM, Stefano Zampini <stefano.zampini at gmail.com> wrote:
> 
> > It just reports that you have a memory leak. Probably you did not call
> > VecDestroy on the Vec created in initpetsc_ at line 2066 of all.f95.
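> >
> > A minimal sketch of the create/destroy pairing in Fortran (a toy
> > program with illustrative names and sizes, assuming a PETSc build
> > that provides the Fortran modules; not your actual code):
> >
> >   program leakcheck
> > #include <petsc/finclude/petscvec.h>
> >         use petscvec
> >         implicit none
> >         Vec            v
> >         PetscErrorCode ierr
> >         PetscScalar    arr(10)
> >         call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
> >         ! wrap an existing array in a sequential Vec (block size 1)
> >         call VecCreateSeqWithArray(PETSC_COMM_SELF, 1, 10, arr, v, ierr)
> >         ! ... use v ...
> >         ! every VecCreate* needs a matching VecDestroy, otherwise
> >         ! valgrind reports the allocation as definitely lost
> >         call VecDestroy(v, ierr)
> >         call PetscFinalize(ierr)
> >   end program leakcheck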
> >
> > On Jan 30, 2017, at 8:04 PM, Praveen C <cpraveen at gmail.com> wrote:
> >
> > Dear all
> >
> > I am trying to find a possible bug in my Fortran PETSc code. Running
> > valgrind, I see messages like this:
> >
> > ==28499== 1,596 (1,512 direct, 84 indirect) bytes in 1 blocks are definitely lost in loss record 174 of 194
> > ==28499==    at 0x4C2D636: memalign (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
> > ==28499==    by 0x4F0F178: PetscMallocAlign (mal.c:28)
> > ==28499==    by 0x4FF7E82: VecCreate (veccreate.c:37)
> > ==28499==    by 0x4FDF198: VecCreateSeqWithArray (bvec2.c:946)
> > ==28499==    by 0x4FE442E: veccreateseqwitharray_ (zbvec2f.c:12)
> > ==28499==    by 0x406921: initpetsc_ (all.f95:2066)
> > ==28499==    by 0x4035B1: run_ (all.f95:2817)
> > ==28499==    by 0x41760C: MAIN__ (all.f95:1383)
> > ==28499==    by 0x417D08: main (all.f95:1330)
> >
> > Does this indicate some bug in my code?
> >
> > Thanks
> > praveen
> >
> >
> >
> 


