On Wed, Oct 13, 2010 at 11:56 AM, Luke Bloy <luke.bloy@gmail.com> wrote:
> Hi,
>
> I've had some issues with PETSc and SLEPc recently where functionality
> would stop working after I recompiled my code, even though only unrelated
> code had changed. This suggested to me a memory leak somewhere.
1) This error does not describe a leak.

2) This is definitely in OpenMPI, during MPI_Init(). I would report it to them.

3) For valgrind to be more useful, run this with a debugging executable so we
can see symbols (a minimal sketch follows below).

4) I do not get this on my machine. It looks like OpenMPI trying to be smart
about multi-socket machines.

   Matt
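P.S. A minimal sketch of the kind of test Luke describes (the file name and
valgrind options below are illustrative, not taken from his attachment):
build it against a PETSc configured with --with-debugging=1, compile with -g,
and run it under valgrind.

  /* init_only.c (name assumed): do nothing but start and stop PETSc,
     so anything valgrind reports comes from the libraries, not user code. */
  #include <petsc.h>

  int main(int argc, char **argv)
  {
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, (char *)0, (char *)0);CHKERRQ(ierr);
    ierr = PetscFinalize();CHKERRQ(ierr);
    return 0;
  }

  valgrind --leak-check=full --track-origins=yes ./init_only

With a debug build the valgrind stack traces show function names and line
numbers instead of bare addresses, which is what makes the report actionable.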
> After some investigation, it seems that there is a leak in
> PetscInitialize(), apparently coming from OpenMPI. I'm attaching the
> valgrind output and a basic executable that calls PetscInitialize() and
> PetscFinalize(). I'm running this on an Ubuntu 10.04 machine with PETSc
> 3.0.0 and OpenMPI 1.4.1 installed from the repositories, although the
> problem is also evident on machines with PETSc 3.0.0 and MPI 1.3
> installed from source.
>
> Is this a problem others are seeing? What is a stable combination of
> PETSc and MPI? How best should I proceed in tracking this down?
>
> Thanks for the input.
> Luke

--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener