A 3D example of KSPSolve?

Shi Jin jinzishuai at yahoo.com
Sat Feb 10 17:54:53 CST 2007


I understand, but this is our reality. I ran the same
test on a cluster with an InfiniBand interconnect:
MPI2/output:Average time for MPI_Barrier(): 9.58443e-06
MPI2/output:Average time for zero size MPI_Send(): 8.9407e-06
MPI4/output:Average time for MPI_Barrier(): 1.93596e-05
MPI4/output:Average time for zero size MPI_Send(): 1.0252e-05
MPI8/output:Average time for MPI_Barrier(): 3.33786e-05
MPI8/output:Average time for zero size MPI_Send(): 1.01328e-05
MPI16/output:Average time for MPI_Barrier(): 4.53949e-05
MPI16/output:Average time for zero size MPI_Send(): 9.87947e-06

The MPI_Barrier times are much better here. However,
when our code is run on both clusters (gigabit and
InfiniBand), we see little difference in their
performance. I attach the log file for a run with 4
processes on this InfiniBand cluster.
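
For anyone who wants to reproduce numbers like these outside of
PETSc, below is a minimal plain-MPI sketch of the same measurement.
It is not PETSc's actual -log_summary instrumentation: the repetition
count and the round-trip/2 estimate of the zero-size send time are
assumptions for illustration.

/* Minimal sketch: average MPI_Barrier() and zero-size MPI_Send()
 * times, similar in spirit to what PETSc's -log_summary reports.
 * The repetition count is an arbitrary choice for illustration. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    const int reps = 100;              /* assumed repetition count */
    double t0, barrier_time, send_time = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Average MPI_Barrier() time over reps iterations. */
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++) MPI_Barrier(MPI_COMM_WORLD);
    barrier_time = (MPI_Wtime() - t0) / reps;

    /* Zero-size ping-pong between ranks 0 and 1; half the
     * round-trip time approximates one send. */
    if (size >= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        send_time = (MPI_Wtime() - t0) / (2.0 * reps);
    }

    if (rank == 0) {
        printf("Average time for MPI_Barrier(): %g\n", barrier_time);
        printf("Average time for zero size MPI_Send(): %g\n", send_time);
    }
    MPI_Finalize();
    return 0;
}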

Shi
--- Barry Smith <bsmith at mcs.anl.gov> wrote:

> 
>  gigabit ethernet has huge latencies; it is not
> good enough for a cluster.
> 
>   Barry
> 
> 
> On Sat, 10 Feb 2007, Shi Jin wrote:
> 
> > Here is the test on a Linux cluster with gigabit
> > ethernet interconnect:
> > MPI2/output:Average time for MPI_Barrier(): 6.00338e-05
> > MPI2/output:Average time for zero size MPI_Send(): 5.40018e-05
> > MPI4/output:Average time for MPI_Barrier(): 0.00806541
> > MPI4/output:Average time for zero size MPI_Send(): 6.07371e-05
> > MPI8/output:Average time for MPI_Barrier(): 0.00805483
> > MPI8/output:Average time for zero size MPI_Send(): 6.97374e-05
> > 
> > Note MPI<N> indicates the run using N processes.
> > It seems that MPI_Barrier takes much longer to
> > finish than on an SMP machine. Is this a load
> > balance issue, or is it simply a sign of slow
> > communication speed?
> > Thanks.
> > Shi
> > --- Shi Jin <jinzishuai at yahoo.com> wrote:
> > 
> > > Yes. The results follow.
> > > --- Satish Balay <balay at mcs.anl.gov> wrote:
> > > 
> > > > Can you send the output from the following
> > > > runs. You can do this with
> > > > src/ksp/ksp/examples/tutorials/ex2.c - to keep
> > > > things simple.
> > > > 
> > > > petscmpirun -n 2 taskset -c 0,2 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > > Average time for MPI_Barrier(): 1.81198e-06
> > > Average time for zero size MPI_Send(): 5.00679e-06
> > > > petscmpirun -n 2 taskset -c 0,4 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > > Average time for MPI_Barrier(): 2.00272e-06
> > > Average time for zero size MPI_Send(): 4.05312e-06
> > > > petscmpirun -n 2 taskset -c 0,6 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > > Average time for MPI_Barrier(): 1.7643e-06
> > > Average time for zero size MPI_Send(): 4.05312e-06
> > > > petscmpirun -n 2 taskset -c 0,8 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > > Average time for MPI_Barrier(): 2.00272e-06
> > > Average time for zero size MPI_Send(): 4.05312e-06
> > > > petscmpirun -n 2 taskset -c 0,12 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > > Average time for MPI_Barrier(): 1.57356e-06
> > > Average time for zero size MPI_Send(): 5.48363e-06
> > > > petscmpirun -n 2 taskset -c 0,14 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > > Average time for MPI_Barrier(): 2.00272e-06
> > > Average time for zero size MPI_Send(): 4.52995e-06
> > > I also did
> > > petscmpirun -n 2 taskset -c 0,10 ./ex2 -log_summary | egrep \(MPI_Send\|MPI_Barrier\)
> > > Average time for MPI_Barrier(): 5.00679e-06
> > > Average time for zero size MPI_Send(): 3.93391e-06
> > > 
> > > 
> > > The results are not very different from each
> > > other. Also please note that the timing is not
> > > exact; sometimes I got O(1e-5) timings for all
> > > cases.
> > > I assume these numbers are pretty good, right?
> > > Does this indicate that MPI communication on an
> > > SMP machine is very fast?
> > > I will do a similar test on a cluster and report
> > > back to the list.
> > > 
> > > Shi
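
For completeness, the per-core pinning that taskset does in the
quoted commands can also be done from inside a program. A minimal
Linux-only sketch follows; sched_setaffinity is a Linux/glibc
interface, and the default core number here is just an example.

/* Pin the calling process to one CPU core, as "taskset -c <core>"
 * does from the command line. Linux-specific. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int core = (argc > 1) ? atoi(argv[1]) : 0;  /* example: core 0 */
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(core, &mask);
    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to core %d\n", core);
    return 0;
}

Compiled as, say, ./pin (a hypothetical name), running "./pin 2"
restricts the process to core 2, much like "taskset -c 2" does.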


 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: log-4-infiniband.txt
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20070210/9504bd0f/attachment.txt>

