Darius,

Thanks for the update.

Regards,
Krishna

On Tue, Oct 12, 2010 at 4:37 PM, Darius Buntinas <buntinas@mcs.anl.gov> wrote:

It looks like we're not calling the comm create hook everywhere we should, so we end up calling the comm destroy hook on something we didn't create. I've created a ticket for this:

https://trac.mcs.anl.gov/projects/mpich2/ticket/1118
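To illustrate the failure mode: the destroy hook frees per-communicator state that only the create hook allocates, so any communicator built on a path that skips the create hook hands a garbage pointer to free(). A minimal sketch of the pattern (hypothetical names, not MPICH2's actual internals):

/* Hypothetical create/destroy hook pair; names are illustrative only. */
#include <stdlib.h>

struct comm {
    void *coll_shm_state;  /* set by the create hook; garbage otherwise */
};

static void comm_create_hook(struct comm *c)
{
    c->coll_shm_state = malloc(64);  /* per-communicator collective state */
}

static void comm_destroy_hook(struct comm *c)
{
    /* If comm_create_hook() was never called for this communicator,
     * this frees an uninitialized pointer -- the "invalid free" /
     * double free that glibc and valgrind report below. NULL-initializing
     * the field would only mask the crash; the real fix is to call the
     * create hook on every path that constructs a communicator. */
    free(c->coll_shm_state);
    c->coll_shm_state = NULL;
}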
-d

On Oct 12, 2010, at 1:11 PM, Krishna Chaitanya Kandalla wrote:

> Hi Pavan, Darius,
> Yes. The tests that I used were icbcast and icreduce, which are part of the MPICH2 test suite. I have placed a copy of the valgrind output for the icreduce test at: http://www.cse.ohio-state.edu/~kandalla/tmp1/icreduce_valgrind
> It's quite a big file, and I see a bunch of "invalid free()" statements there. I hope this gives you the information that you are looking for.
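For anyone reproducing this: one way to generate such a log is to run the test binary under valgrind via mpiexec, along these lines (a sketch; exact flags depend on your valgrind version):

mpiexec -n 2 valgrind --leak-check=full --log-file=icreduce_valgrind.%p ./icreduce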
>
> Thanks,
> Krishna
>
> On Tue, Oct 12, 2010 at 1:41 PM, Pavan Balaji <balaji@mcs.anl.gov> wrote:
>
> He did: icbcast and icreduce in the MPICH2 test suite :-).
>
> -- Pavan
>
>
> On 10/12/2010 12:20 PM, Darius Buntinas wrote:
> Also, can you send us a test program?
>
> Thanks,
> -d
>
> On Oct 12, 2010, at 12:09 PM, Darius Buntinas wrote:
>
> Can you guys run it through valgrind?
>
> Thanks,
> -d
>
> On Oct 12, 2010, at 12:04 PM, Krishna Chaitanya Kandalla wrote:
>
> Hi MPICH2 Developers,
> We found that if MPICH2-1.3rc2 is configured with --enable-nemesis-shm-collectives, some of the inter-communicator collective tests in the MPICH2 test suite, such as icbcast and icreduce, fail. It appears to be some form of memory corruption, given the nature of the error messages that we are seeing:
>
> *** glibc detected *** ./icreduce: double free or corruption (!prev): 0x000000000413b720 ***
> ======= Backtrace: =========
> /lib64/libc.so.6[0x35a787230f]
> /lib64/libc.so.6(cfree+0x4b)[0x35a787276b]
> ./icreduce[0x438ae1]
> ./icreduce[0x41bd57]
> ./icreduce[0x41c085]
> ./icreduce[0x41bf86]
> ./icreduce[0x41c085]
> ./icreduce[0x418ca7]
> ./icreduce[0x404bf6]
> ./icreduce[0x40228a]
> /lib64/libc.so.6(__libc_start_main+0xf4)[0x35a781d994]
> ./icreduce[0x401f69]
> ======= Memory map: ========
> 00400000-004b6000 r-xp 00000000 00:17 2693971 /home/kandalla/mpich2-1.3rc2/icreduce (deleted)
> 006b6000-006b8000 rw-p 000b6000 00:17 2693971 /home/kandalla/mpich2-1.3rc2/icreduce (deleted)
> 006b8000-006df000 rw-p 006b8000 00:00 0
> 04136000-0415f000 rw-p 04136000 00:00 0 [heap]
> 35a7400000-35a741c000 r-xp 00000000 fd:00 4391224 /lib64/ld-2.5.so
> 35a761b000-35a761c000 r--p 0001b000 fd:00 4391224 /lib64/ld-2.5.so
> 35a761c000-35a761d000 rw-p 0001c000 fd:00 4391224 /lib64/ld-2.5.so
> 35a7800000-35a794e000 r-xp 00000000 fd:00 4391225 /lib64/libc-2.5.so
> 35a794e000-35a7b4d000 ---p 0014e000 fd:00 4391225 /lib64/libc-2.5.so
> 35a7b4d000-35a7b51000 r--p 0014d000 fd:00 4391225 /lib64/libc-2.5.so
> 35a7b51000-35a7b52000 rw-p 00151000 fd:00 4391225 /lib64/libc-2.5.so
> 35a7b52000-35a7b57000 rw-p 35a7b52000 00:00 0
> 35a8400000-35a8416000 r-xp 00000000 fd:00 4391230 /lib64/libpthread-2.5.so
> 35a8416000-35a8615000 ---p 00016000 fd:00 4391230 /lib64/libpthread-2.5.so
> 35a8615000-35a8616000 r--p 00015000 fd:00 4391230 /lib64/libpthread-2.5.so
> 35a8616000-35a8617000 rw-p 00016000 fd:00 4391230 /lib64/libpthread-2.5.so
> 35a8617000-35a861b000 rw-p 35a8617000 00:00 0
> 35aa000000-35aa00d000 r-xp 00000000 fd:00 4391237 /lib64/libgcc_s-4.1.2-20080825.so.1
> 35aa00d000-35aa20d000 ---p 0000d000 fd:00 4391237 /lib64/libgcc_s-4.1.2-20080825.so.1
> 35aa20d000-35aa20e000 rw-p 0000d000 fd:00 4391237 /lib64/libgcc_s-4.1.2-20080825.so.1
> [memory map truncated; the run ends with the following MPI error stack]
>
> PMPI_Comm_split(400)..............:
> MPIR_Comm_split_impl(88)..........:
> MPIR_Allgather_impl(744)..........:
> MPIR_Allgather(705)...............:
> MPIR_Allgather_intra(177).........:
> MPIC_Sendrecv(189)................:
> MPIC_Wait(528)....................:
> MPIDI_CH3I_Progress(334)..........:
> MPID_nem_mpich2_blocking_recv(906):
> MPID_nem_tcp_connpoll(1875).......:
> state_commrdy_handler(1703).......:
> MPID_nem_tcp_recv_handler(1682)...: Communication error with rank 4
> MPID_nem_tcp_recv_handler(1582)...: socket closed
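For context, icbcast and icreduce exercise collectives over an intercommunicator. A minimal standalone program in the same spirit (a sketch, not the actual test suite source; run with at least 2 ranks):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, color, val = 0, result = 0;
    MPI_Comm intra, inter;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Split MPI_COMM_WORLD into even and odd groups, then join them with
     * an intercommunicator. The group leaders are ranks 0 and 1 of
     * MPI_COMM_WORLD, respectively. */
    color = rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &intra);
    MPI_Intercomm_create(intra, 0, MPI_COMM_WORLD, color ? 0 : 1, 0, &inter);

    if (color == 0) {
        /* Root group: rank 0 of this group receives the reduction;
         * the other members of the root group pass MPI_PROC_NULL. */
        MPI_Reduce(&val, &result, 1, MPI_INT, MPI_SUM,
                   rank == 0 ? MPI_ROOT : MPI_PROC_NULL, inter);
        if (rank == 0)
            printf("sum from remote group = %d\n", result);
    } else {
        /* Non-root group: everyone contributes to root 0 of the remote group. */
        val = rank;
        MPI_Reduce(&val, NULL, 1, MPI_INT, MPI_SUM, 0, inter);
    }

    MPI_Comm_free(&inter);
    MPI_Comm_free(&intra);
    MPI_Finalize();
    return 0;
}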
>
> Please let us know if you need any further clarification regarding this error.
>
> Thanks,
> Krishna
>
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji

_______________________________________________
mpich-discuss mailing list
mpich-discuss@mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss