<div dir="ltr">Based on what you suggested, I have done the following:<div><br></div><div>i) rerun the same problem without output. The ratios are still roughly the same. So it is not the problem of IO.</div><div><br></div><div>ii) rerun the program on a supercomputer (Stampede), instead of group cluster. the MPI_Barrier time got better:</div><div><br><div class="gmail_extra"><div class="gmail_extra">Average time to get PetscTime(): 0</div><div class="gmail_extra">Average time for MPI_Barrier(): 1.27792e-05</div><div class="gmail_extra">Average time for zero size MPI_Send(): 3.94508e-06</div><div class="gmail_extra"><br></div><div class="gmail_extra">the full petsc logsummary is here: <a href="https://googledrive.com/host/0BxEfb1tasJxhTjNTVXh4bmJmWlk" target="_blank">https://googledrive.com/host/0BxEfb1tasJxhTjNTVXh4bmJmWlk</a></div><div><br></div><div class="gmail_extra">iii) since the time ratios of VecDot (2.5) and MatMult (1.5) are still high, I rerun the program with ipm module. The IPM summary is here: <a href="https://drive.google.com/file/d/0BxEfb1tasJxhYXI0VkV0cjlLWUU/view?usp=sharing" target="_blank">https://drive.google.com/file/d/0BxEfb1tasJxhYXI0VkV0cjlLWUU/view?usp=sharing</a>. From this IPM reuslts, MPI_Allreduce takes 74% of MPI time. The communication by task figure (1st figure in p4) in above link showed that it is not well-balanced. Is this related to the hardware and network (which the users cannot control) or can I do something on my codes to improve?</div><div class="gmail_extra"><br></div><div class="gmail_extra">Thank you.</div><div class="gmail_extra"><br></div><div class="gmail_extra">Best,</div><div class="gmail_extra">Xiangdong</div><div><br></div><div class="gmail_extra">On Fri, Feb 5, 2016 at 10:34 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><br>
Thank you.

Best,
Xiangdong

On Fri, Feb 5, 2016 at 10:34 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

Make the same run with no IO and see if the numbers are much better and if the load balance is better.
> On Feb 5, 2016, at 8:59 PM, Xiangdong <epscodes@gmail.com> wrote:
>
> If I want to know whether only rank 0 is slow (since it may have more IO) or actually a portion of the cores are slow, what tools can I start with?
>
> Thanks.
>
> Xiangdong
>
> On Fri, Feb 5, 2016 at 5:27 PM, Jed Brown <jed@jedbrown.org> wrote:
> Matthew Knepley <knepley@gmail.com> writes:
> >> I attached the full summary. At the end, it has
> >>
> >> Average time to get PetscTime(): 0
> >> Average time for MPI_Barrier(): 8.3971e-05
> >> Average time for zero size MPI_Send(): 7.16746e-06
> >>
> >> Is it an indication of slow network?
> >>
> >
> > I think so. It takes nearly 100 microseconds to synchronize processes.
>
> Edison with 65536 processes:
> Average time for MPI_Barrier(): 4.23908e-05
> Average time for zero size MPI_Send(): 2.46466e-06
>
> Mira with 16384 processes:
> Average time for MPI_Barrier(): 5.7075e-06
> Average time for zero size MPI_Send(): 1.33179e-05
>
> Titan with 131072 processes:
> Average time for MPI_Barrier(): 0.000368595
> Average time for zero size MPI_Send(): 1.71567e-05
>
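P.S. For comparing machines outside of PETSc, here is roughly how I understand the "Average time for MPI_Barrier()" and "Average time for zero size MPI_Send()" numbers above are obtained: repeat the operation many times and divide by the count. This is my own standalone reconstruction, not PETSc's actual code, and the repetition counts are arbitrary:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  /* Average barrier time over many repetitions. */
  const int nbarrier = 100;
  double t0 = MPI_Wtime();
  for (int i = 0; i < nbarrier; i++) MPI_Barrier(MPI_COMM_WORLD);
  double t_barrier = (MPI_Wtime() - t0) / nbarrier;

  /* Average zero-size send time: rank 0 sends empty messages to every
     other rank, and the other ranks post matching receives. */
  const int nsend = 100;
  double t_send = 0.0;
  if (rank == 0) {
    t0 = MPI_Wtime();
    for (int i = 0; i < nsend; i++)
      for (int dest = 1; dest < size; dest++)
        MPI_Send(NULL, 0, MPI_BYTE, dest, 0, MPI_COMM_WORLD);
    if (size > 1) t_send = (MPI_Wtime() - t0) / (nsend * (size - 1));
  } else {
    for (int i = 0; i < nsend; i++)
      MPI_Recv(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

  if (rank == 0)
    printf("Average MPI_Barrier(): %g s\nAverage zero size MPI_Send(): %g s\n",
           t_barrier, t_send);

  MPI_Finalize();
  return 0;
}

Running this on both the group cluster and Stampede should show whether the roughly 6x difference in barrier time between the two log summaries is purely down to the machines' networks.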