<div dir="ltr">Actually, I was using TR with MUMPS before I tried GAMG and it was working pretty well. The reason I want to switch to GAMG is that I have to increase my system size, and the simulation just takes too long with MUMPS and doesn't scale well.</div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 14, 2016 at 3:09 PM, Harshad Sahasrabudhe <span dir="ltr"><<a href="mailto:hsahasra@purdue.edu" target="_blank">hsahasra@purdue.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Barry,<div><br></div><div>Sorry, I had no idea that Newton TR would have anything to do with the linear solver. I was using TR because it is trustworthy and converges every time. I tried LS just now with GAMG+GMRES and it converges, so I don't have the CONVERGED_STEP_LENGTH problem anymore.</div><div><br></div><div>Thanks for your help!</div><span class="HOEnZb"><font color="#888888"><div><br></div><div>Harshad</div></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 14, 2016 at 2:51 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Oh, you are using SNESSolve_NEWTONTR!<br>
<br>
Now it all makes sense! The trust region method imposes additional conditions on the linear solution, so it needs its own special convergence test. In particular, it requires that the solution from the linear solve stay inside the trust region (i.e., not be too large).<br>
<br>
So, any particular reason you use SNESSolve_NEWTONTR instead of the default line search NEWTONLS? In general, unless you have a good reason, I recommend just using NEWTONLS.<br>
<br>
If you really want to use TR, then the "early" return from the linear solve is expected; it is controlled by the trust region size and is not under user control.<br>
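<br>
For reference, the switch is just a change of SNES type; a minimal sketch of a generic driver (error checking omitted, and the setup below is generic PETSc usage rather than anything taken from your application) looks like this:<br>
<br>
   #include <petscsnes.h><br>
<br>
   int main(int argc, char **argv)<br>
   {<br>
     SNES snes;<br>
     PetscInitialize(&argc, &argv, NULL, NULL);<br>
     SNESCreate(PETSC_COMM_WORLD, &snes);<br>
     SNESSetType(snes, SNESNEWTONLS);   /* default: Newton with line search */<br>
     /* SNESSetType(snes, SNESNEWTONTR);   trust region; installs its own KSP convergence test */<br>
     SNESSetFromOptions(snes);           /* or pick at run time: -snes_type newtonls | newtontr */<br>
     /* ... set the residual function and Jacobian, then call SNESSolve() ... */<br>
     SNESDestroy(&snes);<br>
     PetscFinalize();<br>
     return 0;<br>
   }<br>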
<br>
I'm sorry I was so brain dead and did not realize you had been using TR, or we could have resolved this much sooner.<br>
<br>
Barry<br>
<span><br>
<br>
<br>
> On Sep 14, 2016, at 1:31 PM, Harshad Sahasrabudhe <<a href="mailto:hsahasra@purdue.edu" target="_blank">hsahasra@purdue.edu</a>> wrote:<br>
><br>
> Thanks. I now put a watchpoint on<br>
><br>
> watch *( (PetscErrorCode (**)(KSP, PetscInt, PetscReal, KSPConvergedReason *, void *)) &(ksp->converged) )<br>
><br>
> The function pointer changes in the first iteration of SNES. It changed at the following place:<br>
><br>
> Old value =<br>
> (PetscErrorCode (*)(KSP, PetscInt, PetscReal, KSPConvergedReason *, void *)) 0x2b54acdd00aa <KSPConvergedDefault(KSP, PetscInt, PetscReal, KSPConvergedReason*, void*)><br>
> New value =<br>
</span>> (PetscErrorCode (*)(KSP, PetscInt, PetscReal, KSPConvergedReason *, void *)) 0x2b54ad436ce8 <.SNES_TR_KSPConverged_Private.SNES_TR_KSPConverged_Private.SNES_TR_KSPConverged_Private(_p_KSP*, int, double, KSPConvergedReason*, void*)><br>
<div><div>> KSPSetConvergenceTest (ksp=0x22bf090, converge=0x2b54ad436ce8 <SNES_TR_KSPConverged_Private(_p_KSP*, int, double, KSPConvergedReason*, void*)>, cctx=0x1c8b3e0,<br>
> destroy=0x2b54ad437210 <SNES_TR_KSPConverged_Destroy(void*)>)<br>
> at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/ksp/ksp/interface/itfunc.c:1768<br>
> 1767 ksp->converged = converge;<br>
><br>
> Here's the backtrace:<br>
><br>
> #0 KSPSetConvergenceTest (ksp=0x22bf090, converge=0x2b54ad436ce8 <SNES_TR_KSPConverged_Private(_p_KSP*, int, double, KSPConvergedReason*, void*)>, cctx=0x1c8b3e0,<br>
> destroy=0x2b54ad437210 <SNES_TR_KSPConverged_Destroy(void*)>)<br>
> at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/ksp/ksp/interface/itfunc.c:1768<br>
> #1 0x00002b54ad43865a in SNESSolve_NEWTONTR (snes=0x1d9e490) at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/snes/impls/tr/tr.c:146<br>
> #2 0x00002b54acedab57 in SNESSolve (snes=0x1d9e490, b=0x0, x=0x1923420)<br>
> at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/snes/interface/snes.c:3743<br>
> #3 0x00002b54abe8b780 in libMesh::PetscNonlinearSolver<double>::solve (this=0x19198c0, jac_in=..., x_in=..., r_in=...) at src/solvers/petsc_nonlinear_solver.C:714<br>
> #4 0x00002b54abefa61d in libMesh::NonlinearImplicitSystem::solve (this=0x1910fe0) at src/systems/nonlinear_implicit_system.C:183<br>
> #5 0x00002b54a5dcdceb in NonlinearPoisson::execute_solver (this=0x100c500) at NonlinearPoisson.cpp:1191<br>
> #6 0x00002b54a5da733c in NonlinearPoisson::do_solve (this=0x100c500) at NonlinearPoisson.cpp:948<br>
> #7 0x00002b54a6423785 in Simulation::solve (this=0x100c500) at Simulation.cpp:781<br>
> #8 0x00002b54a634826e in Nemo::run_simulations (this=0x63b020) at Nemo.cpp:1313<br>
> #9 0x0000000000426d0d in main (argc=6, argv=0x7ffcdb910768) at main.cpp:447<br>
><br>
> Thanks,<br>
> Harshad<br>
><br>
><br>
> On Wed, Sep 14, 2016 at 2:07 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
><br>
> Super strange; it should never have switched to the function SNES_TR_KSPConverged_Private.<br>
><br>
> Fortunately you can use the same technique to track down where the function pointer changes. Just watch ksp->converged to see when the function pointer gets changed and send back the new stack trace.<br>
><br>
> Barry<br>
><br>
><br>
><br>
><br>
> > On Sep 14, 2016, at 12:39 PM, Harshad Sahasrabudhe <<a href="mailto:hsahasra@purdue.edu" target="_blank">hsahasra@purdue.edu</a>> wrote:<br>
> ><br>
> > Hi Barry,<br>
> ><br>
> > I put a watchpoint on *((KSP_CONVERGED_REASON*) &( ((_p_KSP*)ksp)->reason )) in gdb. The ksp->reason switched between:<br>
> ><br>
> > Old value = KSP_CONVERGED_ITERATING<br>
> > New value = KSP_CONVERGED_RTOL<br>
> > 0x00002b143054bef2 in KSPConvergedDefault (ksp=0x23c3090, n=12, rnorm=5.3617149831259514e-08, reason=0x23c3310, ctx=0x2446210)<br>
> > at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/ksp/ksp/interface/iterativ.c:764<br>
> > 764 *reason = KSP_CONVERGED_RTOL;<br>
> ><br>
> > and<br>
> ><br>
> > Old value = KSP_CONVERGED_RTOL<br>
> > New value = KSP_CONVERGED_ITERATING<br>
> > KSPSetUp (ksp=0x23c3090) at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/ksp/ksp/interface/itfunc.c:226<br>
> > 226 if (!((PetscObject)ksp)->type_name) {<br>
> ><br>
> > However, after iteration 6, it changed to KSP_CONVERGED_STEP_LENGTH<br>
> ><br>
> > Old value = KSP_CONVERGED_ITERATING<br>
> > New value = KSP_CONVERGED_STEP_LENGTH<br>
> > SNES_TR_KSPConverged_Private (ksp=0x23c3090, n=1, rnorm=0.097733468578376406, reason=0x23c3310, cctx=0x1d8f3e0)<br>
> > at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/snes/impls/tr/tr.c:36<br>
> > 36 PetscFunctionReturn(0);<br>
> ><br>
> > Any ideas why that function was executed? Backtrace when the program stopped here:<br>
> ><br>
> > #0 SNES_TR_KSPConverged_Private (ksp=0x23c3090, n=1, rnorm=0.097733468578376406, reason=0x23c3310, cctx=0x1d8f3e0)<br>
> > at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/snes/impls/tr/tr.c:36<br>
> > #1 0x00002b14305d3fda in KSPGMRESCycle (itcount=0x7ffdcf2d4ffc, ksp=0x23c3090)<br>
> > at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/ksp/ksp/impls/gmres/gmres.c:182<br>
> > #2 0x00002b14305d4711 in KSPSolve_GMRES (ksp=0x23c3090) at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/ksp/ksp/impls/gmres/gmres.c:235<br>
> > #3 0x00002b1430526a8a in KSPSolve (ksp=0x23c3090, b=0x1a916c0, x=0x1d89dc0)<br>
> > at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/ksp/ksp/interface/itfunc.c:460<br>
> > #4 0x00002b1430bb3905 in SNESSolve_NEWTONTR (snes=0x1ea2490) at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/snes/impls/tr/tr.c:160<br>
> > #5 0x00002b1430655b57 in SNESSolve (snes=0x1ea2490, b=0x0, x=0x1a27420)<br>
> > at /depot/ncn/apps/conte/conte-gcc-petsc35-dbg/libs/petsc/build-real/src/snes/interface/snes.c:3743<br>
> > #6 0x00002b142f606780 in libMesh::PetscNonlinearSolver<double>::solve (this=0x1a1d8c0, jac_in=..., x_in=..., r_in=...) at src/solvers/petsc_nonlinear_solver.C:714<br>
> > #7 0x00002b142f67561d in libMesh::NonlinearImplicitSystem::solve (this=0x1a14fe0) at src/systems/nonlinear_implicit_system.C:183<br>
> > #8 0x00002b1429548ceb in NonlinearPoisson::execute_solver (this=0x1110500) at NonlinearPoisson.cpp:1191<br>
> > #9 0x00002b142952233c in NonlinearPoisson::do_solve (this=0x1110500) at NonlinearPoisson.cpp:948<br>
> > #10 0x00002b1429b9e785 in Simulation::solve (this=0x1110500) at Simulation.cpp:781<br>
> > #11 0x00002b1429ac326e in Nemo::run_simulations (this=0x63b020) at Nemo.cpp:1313<br>
> > #12 0x0000000000426d0d in main (argc=6, argv=0x7ffdcf2d7908) at main.cpp:447<br>
> ><br>
> ><br>
> > Thanks!<br>
> > Harshad<br>
> ><br>
> > On Wed, Sep 14, 2016 at 10:10 AM, Harshad Sahasrabudhe <<a href="mailto:hsahasra@purdue.edu" target="_blank">hsahasra@purdue.edu</a>> wrote:<br>
> > I think I found the problem. I configured PETSc with COPTFLAGS=-O3. I'll remove that option and try again.<br>
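> ><br>
> > Something along these lines should keep the -g debug symbols (the compiler wrappers and PETSC_ARCH name here are just placeholders for what I actually use, not my exact configure line):<br>
> ><br>
> > ./configure PETSC_ARCH=arch-conte-dbg --with-debugging=1 \<br>
> >     --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 \<br>
> >     COPTFLAGS="-g -O0" CXXOPTFLAGS="-g -O0" FOPTFLAGS="-g -O0"<br>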
> ><br>
> > Thanks!<br>
> > Harshad<br>
> ><br>
> > On Wed, Sep 14, 2016 at 10:06 AM, Harshad Sahasrabudhe <<a href="mailto:hsahasra@purdue.edu" target="_blank">hsahasra@purdue.edu</a>> wrote:<br>
> > Hi Barry,<br>
> ><br>
> > Thanks for your input. I tried to set a watchpoint on ((_p_KSP*)ksp)->reason, but gdb says there is no symbol _p_KSP in the current context. Basically, gdb isn't able to find the PETSc source code. I built PETSc statically with --with-debugging=1 and -fPIC, but it seems the libpetsc.a I get doesn't contain debugging symbols (checked using objdump -g). How do I get the PETSc library to have debugging info?<br>
> ><br>
> > Thanks,<br>
> > Harshad<br>
> ><br>
> > On Tue, Sep 13, 2016 at 2:47 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
> ><br>
> > > On Sep 13, 2016, at 1:01 PM, Harshad Sahasrabudhe <<a href="mailto:hsahasra@purdue.edu" target="_blank">hsahasra@purdue.edu</a>> wrote:<br>
> > ><br>
> > > Hi Barry,<br>
> > ><br>
> > > I compiled with mpich configured using --enable-g=meminit to get rid of MPI errors in Valgrind. Doing this reduced the number of errors to 2. I have attached the Valgrind output.<br>
> ><br>
> > This isn't helpful but it seems not to be a memory corruption issue :-(<br>
> > ><br>
> > > I'm using GAMG+GMRES in each linear iteration of SNES. The linear solver converges with CONVERGED_RTOL for the first 6 iterations and with CONVERGED_STEP_LENGTH after that. I'm still very confused about why this is happening. Any thoughts/ideas?<br>
> ><br>
> > Does this happen on one process? If so, I would run in the debugger and track the variable to see every place it is changed; this would point to exactly which piece of code is changing the variable to this unexpected value.<br>
> ><br>
> > For example, with lldb one can use watch (<a href="http://lldb.llvm.org/tutorial.html" rel="noreferrer" target="_blank">http://lldb.llvm.org/tutorial.html</a>) to see each time a variable gets changed. gdb has a similar facility.<br>
> ><br>
> > The variable to watch is ksp->reason. Once you get the hang of this, it can take just a few minutes to track down the code that is setting this unexpected value, though I understand it can be intimidating if you haven't done it before.<br>
> ><br>
> > Barry<br>
> ><br>
> > You can do the same thing in parallel (say, on two processes) if you need to, but it is more cumbersome since you need to run multiple debuggers. You can have PETSc start up multiple debuggers with mpiexec -n 2 ./ex -start_in_debugger<br>
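> ><br>
> > The gdb side of it is roughly the following; break somewhere the KSP object is in scope first so the watchpoint expression can see it (the breakpoint location below is just an example), then let it run and backtrace each time the watchpoint fires:<br>
> ><br>
> > (gdb) break KSPSolve<br>
> > (gdb) run<br>
> > (gdb) watch ksp->reason<br>
> > (gdb) continue<br>
> > (gdb) backtrace<br>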
> ><br>
> ><br>
> ><br>
> ><br>
> > ><br>
> > > Thanks,<br>
> > > Harshad<br>
> > ><br>
> > > On Thu, Sep 8, 2016 at 11:26 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
> > ><br>
> > > Install your MPI with --download-mpich as a PETSc ./configure option; this will eliminate all the MPICH valgrind errors. Then send the resulting valgrind file as an attachment.<br>
> > ><br>
> > > I do not 100 % trust any code that produces such valgrind errors.<br>
> > ><br>
> > > Barry<br>
> > ><br>
> > ><br>
> > ><br>
> > > > On Sep 8, 2016, at 10:12 PM, Harshad Sahasrabudhe <<a href="mailto:hsahasra@purdue.edu" target="_blank">hsahasra@purdue.edu</a>> wrote:<br>
> > > ><br>
> > > > Hi Barry,<br>
> > > ><br>
> > > > Thanks for the reply. My code is in C. I ran with Valgrind and found many "Conditional jump or move depends on uninitialized value(s)", "Invalid read" and "Use of uninitialized value" errors. I think all of them are from the libraries I'm using (LibMesh, Boost, MPI, etc.). I'm not really sure what I'm looking for in the Valgrind output. At the end of the file, I get:<br>
> > > ><br>
> > > > ==40223== More than 10000000 total errors detected. I'm not reporting any more.<br>
> > > > ==40223== Final error counts will be inaccurate. Go fix your program!<br>
> > > > ==40223== Rerun with --error-limit=no to disable this cutoff. Note<br>
> > > > ==40223== that errors may occur in your program without prior warning from<br>
> > > > ==40223== Valgrind, because errors are no longer being displayed.<br>
> > > ><br>
> > > > Can you give some suggestions on how I should proceed?<br>
> > > ><br>
> > > > Thanks,<br>
> > > > Harshad<br>
> > > ><br>
> > > > On Thu, Sep 8, 2016 at 1:59 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
> > > ><br>
> > > > This is very odd. CONVERGED_STEP_LENGTH for KSP is very specialized and should never occur with GMRES.<br>
> > > ><br>
> > > > Can you run with valgrind to make sure there is no memory corruption? <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a><br>
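> > > ><br>
> > > > The usual invocation is something along these lines (the executable name and its options below are just placeholders):<br>
> > > ><br>
> > > > mpiexec -n 1 valgrind --tool=memcheck -q --num-callers=20 --track-origins=yes --log-file=valgrind.log.%p ./your_app -your_options<br>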
> > > ><br>
> > > > Is your code fortran or C?<br>
> > > ><br>
> > > > Barry<br>
> > > ><br>
> > > > > On Sep 8, 2016, at 10:38 AM, Harshad Sahasrabudhe <<a href="mailto:hsahasra@purdue.edu" target="_blank">hsahasra@purdue.edu</a>> wrote:<br>
> > > > ><br>
> > > > > Hi,<br>
> > > > ><br>
> > > > > I'm using GAMG + GMRES for my Poisson problem. The solver converges with KSP_CONVERGED_STEP_LENGTH at a residual of 9.773346857844e-02, which is much higher than what I need (I need a tolerance of at least 1E-8). I am not able to figure out which tolerance I need to set to avoid convergence due to CONVERGED_STEP_LENGTH.<br>
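> > > > > ><br>
> > > > > > For reference, the solver is selected with options roughly like the following (a simplified version of what my application passes, not the exact option set):<br>
> > > > > ><br>
> > > > > > -ksp_type gmres -pc_type gamg -ksp_rtol 1e-8 -ksp_monitor -ksp_view -ksp_converged_reason<br>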
> > > > ><br>
> > > > > Any help is appreciated! Output of -ksp_view and -ksp_monitor:<br>
> > > > ><br>
> > > > > 0 KSP Residual norm 3.121347818142e+00<br>
> > > > > 1 KSP Residual norm 9.773346857844e-02<br>
> > > > > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1<br>
> > > > > KSP Object: 1 MPI processes<br>
> > > > > type: gmres<br>
> > > > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
> > > > > GMRES: happy breakdown tolerance 1e-30<br>
> > > > > maximum iterations=10000, initial guess is zero<br>
> > > > > tolerances: relative=1e-08, absolute=1e-50, divergence=10000<br>
> > > > > left preconditioning<br>
> > > > > using PRECONDITIONED norm type for convergence test<br>
> > > > > PC Object: 1 MPI processes<br>
> > > > > type: gamg<br>
> > > > > MG: type is MULTIPLICATIVE, levels=2 cycles=v<br>
> > > > > Cycles per PCApply=1<br>
> > > > > Using Galerkin computed coarse grid matrices<br>
> > > > > > Coarse grid solver -- level -------------------------------<br>
> > > > > KSP Object: (mg_coarse_) 1 MPI processes<br>
> > > > > type: preonly<br>
> > > > > maximum iterations=1, initial guess is zero<br>
> > > > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000<br>
> > > > > left preconditioning<br>
> > > > > using NONE norm type for convergence test<br>
> > > > > PC Object: (mg_coarse_) 1 MPI processes<br>
> > > > > type: bjacobi<br>
> > > > > block Jacobi: number of blocks = 1<br>
> > > > > Local solve is same for all blocks, in the following KSP and PC objects:<br>
> > > > > KSP Object: (mg_coarse_sub_) 1 MPI processes<br>
> > > > > type: preonly<br>
> > > > > maximum iterations=1, initial guess is zero<br>
> > > > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000<br>
> > > > > left preconditioning<br>
> > > > > using NONE norm type for convergence test<br>
> > > > > PC Object: (mg_coarse_sub_) 1 MPI processes<br>
> > > > > type: lu<br>
> > > > > LU: out-of-place factorization<br>
> > > > > tolerance for zero pivot 2.22045e-14<br>
> > > > > using diagonal shift on blocks to prevent zero pivot [INBLOCKS]<br>
> > > > > matrix ordering: nd<br>
> > > > > factor fill ratio given 5, needed 1.91048<br>
> > > > > Factored matrix follows:<br>
> > > > > Mat Object: 1 MPI processes<br>
> > > > > type: seqaij<br>
> > > > > rows=284, cols=284<br>
> > > > > package used to perform factorization: petsc<br>
> > > > > total: nonzeros=7726, allocated nonzeros=7726<br>
> > > > > total number of mallocs used during MatSetValues calls =0<br>
> > > > > using I-node routines: found 133 nodes, limit used is 5<br>
> > > > > linear system matrix = precond matrix:<br>
> > > > > Mat Object: 1 MPI processes<br>
> > > > > type: seqaij<br>
> > > > > rows=284, cols=284<br>
> > > > > total: nonzeros=4044, allocated nonzeros=4044<br>
> > > > > total number of mallocs used during MatSetValues calls =0<br>
> > > > > not using I-node routines<br>
> > > > > linear system matrix = precond matrix:<br>
> > > > > Mat Object: 1 MPI processes<br>
> > > > > type: seqaij<br>
> > > > > rows=284, cols=284<br>
> > > > > total: nonzeros=4044, allocated nonzeros=4044<br>
> > > > > total number of mallocs used during MatSetValues calls =0<br>
> > > > > not using I-node routines<br>
> > > > > > Down solver (pre-smoother) on level 1 -------------------------------<br>
> > > > > KSP Object: (mg_levels_1_) 1 MPI processes<br>
> > > > > type: chebyshev<br>
> > > > > Chebyshev: eigenvalue estimates: min = 0.195339, max = 4.10212<br>
> > > > > maximum iterations=2<br>
> > > > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000<br>
> > > > > left preconditioning<br>
> > > > > using nonzero initial guess<br>
> > > > > using NONE norm type for convergence test<br>
> > > > > PC Object: (mg_levels_1_) 1 MPI processes<br>
> > > > > type: sor<br>
> > > > > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1<br>
> > > > > linear system matrix = precond matrix:<br>
> > > > > Mat Object: () 1 MPI processes<br>
> > > > > type: seqaij<br>
> > > > > rows=9036, cols=9036<br>
> > > > > total: nonzeros=192256, allocated nonzeros=192256<br>
> > > > > total number of mallocs used during MatSetValues calls =0<br>
> > > > > not using I-node routines<br>
> > > > > Up solver (post-smoother) same as down solver (pre-smoother)<br>
> > > > > linear system matrix = precond matrix:<br>
> > > > > Mat Object: () 1 MPI processes<br>
> > > > > type: seqaij<br>
> > > > > rows=9036, cols=9036<br>
> > > > > total: nonzeros=192256, allocated nonzeros=192256<br>
> > > > > total number of mallocs used during MatSetValues calls =0<br>
> > > > > not using I-node routines<br>
> > > > ><br>
> > > > > Thanks,<br>
> > > > > Harshad<br>
> > > ><br>
> > > ><br>
> > ><br>
> > ><br>
> > > <valgrind.log.33199><br>
> ><br>
> ><br>
> ><br>
> ><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>