[petsc-dev] Valgrind invalid read in GAMG
Matthew Knepley
knepley at gmail.com
Fri Aug 9 09:14:25 CDT 2013
On Fri, Aug 9, 2013 at 8:49 AM, John Mousel <john.mousel at gmail.com> wrote:
> I'm getting the following invalid read when I use GAMG. I've included the
> output of KSPView at the bottom.
>
This seems to be a more basic problem in MatGetSubMatrix(). Could you give
us the matrix? It can be output using -ksp_view_binary.
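For reference, here is a minimal sketch of how the dumped matrix could be read
back in and pushed through the same MatGetSubMatrix() path under valgrind. The
file name "binaryoutput" (the binary viewer default) and the all-local-rows
index set are only placeholders for illustration, not what GAMG actually
selects:

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A, Asub;
  PetscViewer    viewer;
  IS             isrow;
  PetscInt       rstart, rend;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);
  /* Read the matrix written by -ksp_view_binary; "binaryoutput" is the
     default file name, adjust if another name was given */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "binaryoutput", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatLoad(A, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  /* Placeholder index set (all locally owned rows); createLevel() in GAMG
     passes its own row selection, but this exercises the same
     MatGetSubMatrix_MPIAIJ code path */
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_WORLD, rend - rstart, rstart, 1, &isrow);CHKERRQ(ierr);
  ierr = MatGetSubMatrix(A, isrow, isrow, MAT_INITIAL_MATRIX, &Asub);CHKERRQ(ierr);

  ierr = MatDestroy(&Asub);CHKERRQ(ierr);
  ierr = ISDestroy(&isrow);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

Running that on 4 processes under valgrind with your dumped matrix should show
whether the invalid read is reproducible outside of GAMG.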
Thanks,
Matt
> ==18312== Memcheck, a memory error detector
> ==18312== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
> ==18312== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
> ==18312== Command: ../src/CYLINDER_EXE -ELAFINT_case_name cylinder
> -lspaint_ksp_type bcgsl -lspaint_pc_type gamg -lspaint_pc_gamg_type agg
> -lspaint_pc_gamg_agg_nsmooths 1 -lspaint_pc_gamg_sym_graph true
> -lspaint_ksp_monitor -pres_ksp_type preonly -pres_pc_type redistribute
> -pres_redistribute_ksp_type bcgsl -pres_redistribute_pc_type gamg
> -pres_redistribute_pc_gamg_threshold 0.05
> -pres_redistribute_mg_levels_ksp_type richardson
> -pres_redistribute_mg_levels_pc_type sor
> -pres_redistribute_mg_coarse_ksp_type richardson
> -pres_redistribute_mg_coarse_pc_type sor
> -pres_redistribute_mg_coarse_pc_sor_its 5 -pres_redistribute_pc_gamg_type
> agg -pres_redistribute_pc_gamg_agg_nsmooths 2
> -pres_redistribute_pc_gamg_sym_graph true
> -pres_redistribute_ksp_initial_guess_nonzero 0 -vel_ksp_monitor
> -pres_redistribute_ksp_monitor
> ==18312== Parent PID: 18308
> ==18312==
> ==18311== Invalid read of size 8
> ==18311== at 0x4DF2BDC: PetscCheckPointer (checkptr.c:52)
> ==18311== by 0x54370DE: MatSetValues_MPIAIJ (mpiaij.c:506)
> ==18311== by 0x5462AA8: MatGetSubMatrix_MPIAIJ_Private (mpiaij.c:3860)
> ==18311== by 0x5461799: MatGetSubMatrix_MPIAIJ (mpiaij.c:3733)
> ==18311== by 0x551B5E8: MatGetSubMatrix (matrix.c:7322)
> ==18311== by 0x5817535: createLevel (gamg.c:404)
> ==18311== by 0x5819420: PCSetUp_GAMG (gamg.c:630)
> ==18311== by 0x5764D93: PCSetUp (precon.c:890)
> ==18311== by 0x589B211: KSPSetUp (itfunc.c:278)
> ==18311== by 0x589C39A: KSPSolve (itfunc.c:399)
> ==18311== by 0x57555E1: kspsolve_ (itfuncf.c:219)
> ==18311== by 0x61B7A6: axbsolve_ (Axb.F90:139)
> ==18311== Address 0x83d9c20 is 9,376 bytes inside a block of size 9,380 alloc'd
> ==18311== at 0x4A06548: memalign (vg_replace_malloc.c:727)
> ==18311== by 0x4DF2C7D: PetscMallocAlign (mal.c:==18309==
> ==18309== HEAP SUMMARY:
> ==18309== in use at exit: 3,262,243 bytes in 17,854 blocks
> ==18309== total heap usage: 329,856 allocs, 312,002 frees, 136,790,072 bytes allocated
> ==18309==
> ==18309== LEAK SUMMARY:
> ==18309== definitely lost: 0 bytes in 0 blocks
> ==18309== indirectly lost: 0 bytes in 0 blocks
> ==18309== possibly lost: 0 bytes in 0 blocks
> ==18309== still reachable: 3,262,243 bytes in 17,854 blocks
> ==18309== suppressed: 0 bytes in 0 blocks
> ==18309== Rerun with --leak-check=full to see details of leaked memory
> ==18309==
> ==18309== For counts of detected and suppressed errors, rerun with: -v
> ==18309== Use --track-origins=yes to see where uninitialised values come from
> ==18309== ERROR SUMMARY: 24 errors from 1 contexts (suppressed: 6 from 6)
> ,141 frees, 130,288,754 bytes allocated
> ==18310==
> ==18310== LEAK SUMMARY:
> ==18310== definitely lost: 0 bytes in 0 blocks
> ==18310== indirectly lost: 0 bytes in 0 blocks
> ==18310== possibly lost: 0 bytes in 0 blocks
> ==18310== still reachable: 3,376,159 bytes in 18,772 blocks
> ==18310== suppressed: 0 bytes in 0 blocks
> ==18310== Rerun with --leak-check=full to see details of leaked memory
> ==18310==
> ==18310== For counts of detected and suppressed errors, rerun with: -v
> ==18310== ERROR SUMMARY: 2 errors from 1 contexts (suppressed: 6 from 6)
>
>
>
> KSP Object:(pres_) 4 MPI processes
> type: preonly
> maximum iterations=20000, initial guess is zero
> tolerances: relative=5e-08, absolute=1e-50, divergence=10000
> left preconditioning
> using NONE norm type for convergence test
> PC Object:(pres_) 4 MPI processes
> type: redistribute
> Number rows eliminated 820 Percentage rows eliminated 11.6213
> Redistribute preconditioner:
> KSP Object: (pres_redistribute_) 4 MPI processes
> type: bcgsl
> BCGSL: Ell = 2
> BCGSL: Delta = 0
> maximum iterations=20000, initial guess is zero
> tolerances: relative=5e-08, absolute=1e-50, divergence=10000
> left preconditioning
> has attached null space
> using PRECONDITIONED norm type for convergence test
> PC Object: (pres_redistribute_) 4 MPI processes
> type: gamg
> MG: type is MULTIPLICATIVE, levels=3 cycles=v
> Cycles per PCApply=1
> Using Galerkin computed coarse grid matrices
> Coarse grid solver -- level -------------------------------
> KSP Object: (pres_redistribute_mg_coarse_) 4 MPI processes
> type: preonly
> maximum iterations=1, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (pres_redistribute_mg_coarse_) 4 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 5, local iterations = 1, omega = 1
> linear system matrix = precond matrix:
> Mat Object: 4 MPI processes
> type: mpiaij
> rows=61, cols=61
> total: nonzeros=1846, allocated nonzeros=1846
> total number of mallocs used during MatSetValues calls =0
> not using I-node (on process 0) routines
> Down solver (pre-smoother) on level 1 -------------------------------
> KSP Object: (pres_redistribute_mg_levels_1_) 4 MPI processes
> type: richardson
> Richardson: damping factor=1
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (pres_redistribute_mg_levels_1_) 4 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
> linear system matrix = precond matrix:
> Mat Object: 4 MPI processes
> type: mpiaij
> rows=870, cols=870
> total: nonzeros=16020, allocated nonzeros=16020
> total number of mallocs used during MatSetValues calls =0
> not using I-node (on process 0) routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 2 -------------------------------
> KSP Object: (pres_redistribute_mg_levels_2_) 4 MPI processes
> type: richardson
> Richardson: damping factor=1
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (pres_redistribute_mg_levels_2_) 4 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
> linear system matrix = precond matrix:
> Mat Object: 4 MPI processes
> type: mpiaij
> rows=6236, cols=6236
> total: nonzeros=31220, allocated nonzeros=31220
> total number of mallocs used during MatSetValues calls =0
> not using I-node (on process 0) routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> linear system matrix = precond matrix:
> Mat Object: 4 MPI processes
> type: mpiaij
> rows=6236, cols=6236
> total: nonzeros=31220, allocated nonzeros=31220
> total number of mallocs used during MatSetValues calls =0
> not using I-node (on process 0) routines
> linear system matrix = precond matrix:
> Mat Object: 4 MPI processes
> type: mpiaij
> rows=7056, cols=7056
> total: nonzeros=32416, allocated nonzeros=33180
> total number of mallocs used during MatSetValues calls =0
> not using I-node (on process 0) routines
>
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener