A 3D example of KSPSolve?

Barry Smith bsmith at mcs.anl.gov
Sat Feb 10 17:03:49 CST 2007


 Gigabit ethernet has huge latencies; it is not
good enough for a cluster.

  Barry
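For reference, the SMP core-pinning sweep quoted below can be scripted. This is a minimal sketch only: `petscmpirun`, `taskset`, `./ex2`, and `-log_summary` are the commands from the thread, while the `LAUNCHER` override and the `sweep` function name are assumptions added here so the loop can be dry-run on a machine without PETSc installed.

```shell
#!/bin/sh
# Sketch of the core-pinning latency sweep quoted in this thread.
# petscmpirun, taskset, ./ex2 and -log_summary come from the thread;
# LAUNCHER is a hypothetical override for dry-running without PETSc.
sweep() {
    for cores in 0,2 0,4 0,6 0,8 0,10 0,12 0,14; do
        echo "== taskset -c $cores =="
        # Pin the two MPI ranks to the given cores, keep only the
        # latency summary lines from the log output.
        ${LAUNCHER:-petscmpirun -n 2} taskset -c "$cores" ./ex2 -log_summary 2>/dev/null \
            | egrep '(MPI_Send|MPI_Barrier)'
    done
}
sweep
```

With a real PETSc build, leaving `LAUNCHER` unset runs the same core pairings as the transcript below.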


On Sat, 10 Feb 2007, Shi Jin wrote:

> Here is the test on a linux cluster with gigabit
> ethernet interconnect.
> MPI2/output:Average time for MPI_Barrier(): 6.00338e-05
> MPI2/output:Average time for zero size MPI_Send(): 5.40018e-05
> MPI4/output:Average time for MPI_Barrier(): 0.00806541
> MPI4/output:Average time for zero size MPI_Send(): 6.07371e-05
> MPI8/output:Average time for MPI_Barrier(): 0.00805483
> MPI8/output:Average time for zero size MPI_Send(): 6.97374e-05
> 
> Note MPI<N> indicates the run using N processes.
> It seems that MPI_Barrier takes much longer to
> finish here than on an SMP machine. Is this a load
> balance issue, or merely a sign of the slow
> communication speed?
> Thanks.
> Shi
> --- Shi Jin <jinzishuai at yahoo.com> wrote:
> 
> > Yes. The results follow.
> > --- Satish Balay <balay at mcs.anl.gov> wrote:
> > 
> > > Can you send the output from the following runs?
> > > You can do this with
> > > src/ksp/ksp/examples/tutorials/ex2.c - to keep
> > > things simple.
> > > 
> > > petscmpirun -n 2 taskset -c 0,2 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > Average time for MPI_Barrier(): 1.81198e-06
> > Average time for zero size MPI_Send(): 5.00679e-06
> > > petscmpirun -n 2 taskset -c 0,4 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > Average time for MPI_Barrier(): 2.00272e-06
> > Average time for zero size MPI_Send(): 4.05312e-06
> > > petscmpirun -n 2 taskset -c 0,6 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > Average time for MPI_Barrier(): 1.7643e-06
> > Average time for zero size MPI_Send(): 4.05312e-06
> > > petscmpirun -n 2 taskset -c 0,8 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > Average time for MPI_Barrier(): 2.00272e-06
> > Average time for zero size MPI_Send(): 4.05312e-06
> > > petscmpirun -n 2 taskset -c 0,12 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > Average time for MPI_Barrier(): 1.57356e-06
> > Average time for zero size MPI_Send(): 5.48363e-06
> > > petscmpirun -n 2 taskset -c 0,14 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > Average time for MPI_Barrier(): 2.00272e-06
> > Average time for zero size MPI_Send(): 4.52995e-06
> > I also did
> > petscmpirun -n 2 taskset -c 0,10 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > Average time for MPI_Barrier(): 5.00679e-06
> > Average time for zero size MPI_Send(): 3.93391e-06
> > 
> > 
> > The results are not so different from each other.
> > Also, please note that the timing is not exact;
> > sometimes I got O(1e-5) timings for all cases.
> > I assume these numbers are pretty good, right?
> > Does it indicate that MPI communication on an SMP
> > machine is very fast?
> > I will do a similar test on a cluster and report it
> > back to the list.
> > 
> > Shi
> > 
> 
> 
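The cluster figures earlier in the thread are prefixed `MPI2/output:`, `MPI4/output:`, `MPI8/output:`, suggesting one output file per run directory. A small helper to collect those latency lines in one sorted listing might look like the following sketch (the `MPI<N>/output` layout is taken from the thread; the `collect` function name is an assumption):

```shell
#!/bin/sh
# Sketch: gather the latency summary lines from the per-run output
# files named in the cluster test above (MPI2/output, MPI4/output, ...).
collect() {
    # $1: glob of output files, e.g. 'MPI*/output'; left unquoted
    # below so the shell expands it. -H prefixes each match with its
    # file name, matching the format quoted in the thread.
    grep -H 'Average time' $1 2>/dev/null | sort
}
collect 'MPI*/output'
```

Sorting groups the MPI_Barrier and MPI_Send lines per run, which makes the 2-, 4-, and 8-process results easy to compare side by side.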




More information about the petsc-users mailing list