I tried the MPICH_NO_LOCAL option. Communication time between the
processes grew 400% with that option.

Same 'nap' problem with this env var.

There is an MPI_Barrier call early in my application; maybe this is the
cause.
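
For reference, the communication pattern is roughly the following (a
minimal sketch with made-up loop counts and message sizes, plus the
usual rank/size/finalize boilerplate; not the actual code):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        int n = 1 << 20;                   /* made-up message size */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        MPI_Barrier(MPI_COMM_WORLD);       /* the early barrier */

        int *buf = calloc(n, sizeof(int));
        for (int iter = 0; iter < 1000; iter++) {
            if (rank == 0) {
                /* rank 0 collects one message from each peer */
                for (int p = 1; p < nprocs; p++)
                    MPI_Recv(buf, n, MPI_INT, p, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
            } else {
                MPI_Send(buf, n, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }
            MPI_Barrier(MPI_COMM_WORLD);   /* sync every iteration */
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }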

tan
<DIV style="FONT-FAMILY: times new roman, new york, times, serif; FONT-SIZE: 12pt"><BR>
<DIV style="FONT-FAMILY: arial, helvetica, sans-serif; FONT-SIZE: 13px"><FONT size=2 face=Tahoma>
<HR SIZE=1>
From: Darius Buntinas <buntinas@mcs.anl.gov>
To: mpich-discuss@mcs.anl.gov
Sent: Monday, July 13, 2009 11:47:38 AM
Subject: Re: [mpich-discuss] version 1.1 strange behavior : all processes become idle for extensive period

Is there a simpler example of this that you can send us? If nothing
else, a binary would be ok.

Does the program that takes the 1 minute "nap" use threads? If so, how
many threads does each process create?

Can you find out what the processes (or threads, if it's multithreaded)
are doing during this time? E.g., are they in an MPI call? Are they
blocking on a mutex? If so, can you tell us what line number it's
blocked on?

Can you try this without shared memory by setting the environment
variable MPICH_NO_LOCAL to 1 and see if you get the same problem?

    MPICH_NO_LOCAL=1 mpiexec -n 4 ...
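
One way to see what each process is doing during the stall is to attach
gdb to it and dump backtraces for every thread (a sketch; 12345 stands
for a PID taken from top):

    gdb -p 12345
    (gdb) thread apply all bt
    (gdb) detach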

Thanks,
-d


On 07/13/2009 01:35 PM, chong tan wrote:
> Sorry, can't do that. The benchmark involves two things, one of which
> comes from my customer and which I am not allowed to distribute. I may
> be able to get a limited license of my product for you to try, but I
> definitely cannot send source code.
>
> tan
>
> ------------------------------------------------------------------------
> From: Darius Buntinas <buntinas@mcs.anl.gov>
> To: mpich-discuss@mcs.anl.gov
> Sent: Monday, July 13, 2009 10:54:50 AM
> Subject: Re: [mpich-discuss] version 1.1 strange behavior : all
> processes become idle for extensive period
>
> Can you send us the benchmark you're using? This will help us figure
> out what's going on.
>
> Thanks,
> -d
>
> On 07/13/2009 12:36 PM, chong tan wrote:
>> Thanks, Darius.
>>
>> When I did the comparison (or benchmarking), I had two identical
>> source trees. Everything was recompiled from the ground up and
>> compiled/linked against the version of MPICH2 to be used.
>>
>> I have many tests; this is the only one showing this behavior, and it
>> is predictably repeatable. Most of my tests show comparable
>> performance, and many do better with 1.1.
>>
>> The 'weirdest' thing is the ~1 minute span where there is no
>> activity on the box at all, zero activity except 'top', with the
>> machine load at around 0.12. I don't know how to explain this
>> 'behavior', and I am extremely curious whether anyone can.
>>
>> I can't repeat this on AMD boxes, as I don't have one with only 32G
>> of memory. I can't repeat it on a Niagara box, as thread-multiple
>> won't build there.
>>
>> I will try to rebuild 1.1 without thread-multiple. Will keep you
>> posted.
>>
>> Meanwhile, if anyone has any speculation on this, please bring it up.
>>
>> thanks
>> tan
>>
>> ------------------------------------------------------------------------
>> From: Darius Buntinas <buntinas@mcs.anl.gov>
href="mailto:buntinas@mcs.anl.gov" ymailto="mailto:buntinas@mcs.anl.gov">buntinas@mcs.anl.gov</A>>><BR>>> *To:* <A href="mailto:mpich-discuss@mcs.anl.gov" ymailto="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</A> <mailto:<A href="mailto:mpich-discuss@mcs.anl.gov" ymailto="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</A>><BR>>> *Sent:* Monday, July 13, 2009 8:30:19 AM<BR>>> *Subject:* Re: [mpich-discuss] version 1.1 strange behavior : all<BR>>> processes become idle for extensive period<BR>>><BR>>> Tan,<BR>>><BR>>> Did you just re-link the applications, or did you recompile them?<BR>>> Version 1.1 is most likely not binary compatible with 1.0.6, so you<BR>>> really need to recompile the application.<BR>>><BR>>> Next, don't use the --enable-threads=multiple flag when configuring<BR>>> mpich2. By default, mpich2 supports all thread
>>
>> Let us know if either of these fixes the problem, especially if just
>> removing the --enable-threads option fixes this.
>>
>> Thanks,
>> -d
>>
>> On 07/10/2009 06:19 PM, chong tan wrote:
>>> I am seeing this funny situation which I did not see on 1.0.6 and
>>> 1.0.8. Some background:
>>>
>>> machine : Intel 4-core Core 2
>>> running mpiexec -n 4
>>> machine has 32G of mem.
>>>
>>> When my
>>> application runs, almost all memory is used. However, there is no
>>> swapping. I have exclusive use of the machine, so contention is not
>>> an issue.
>>>
>>> issue #1 : processes take extra long to initialize, compared to 1.0.6
>>> issue #2 : during the run, at times all of them become idle at the
>>> same time, for almost a minute. We never observed this with 1.0.6.
>>>
>>> The code is the same, only linked with different versions of MPICH2.
>>>
>>> MPICH2 was built with --enable-threads=multiple for 1.1; without it
>>> for 1.0.6 and 1.0.8.
>>>
>>> MPI calls are all in the main application thread. I use only 4 MPI
>>> functions:
>>> Init(), Send(), Recv() and Barrier().
>>>
>>> Any suggestions?
>>>
>>> thanks
>>> tan