[mpich-discuss] core 2 quad and other multiple core processors

Ariovaldo de Souza Junior ariovaldojunior at gmail.com
Wed Jul 2 16:12:03 CDT 2008


Hi Chong,

Yes, I'm a student. In truth, the person who will use MPI is my teacher, who
will run NAMD for molecular dynamics. I was given the challenge of setting up
this cluster, and I had no knowledge of Linux before starting. Now that it is
running, I think maybe I could go a bit further. Theoretically my work is
done: the machines are working well. But I would still like to know how to
extract the maximum performance from these computers, which have already
proven to be good ones, since we have already used them to run Gaussian 03
molecular calculations.

I want to know a bit more because I love computers, and now that I have been
introduced to this universe of clusters, I want to learn more. That's all.
I'm already reading the tips you gave me, even though it is a bit complicated
to extract the information I want from them. Thanks a lot for your
attention.

Ari.

2008/7/2 chong tan <chong_guan_tan at yahoo.com>:

>
> Ari,
>
> Are you a student? Anyway, I'd like to point you to the answer to your
> problem:
>
> mpiexec -help
>
>
>
> or look in your mpich2 package: under www/www1 there is an mpiexec.html
>
>
>
> It would be easier to give you the answer, but getting you to look for the
> answer is better.
>
>
>
>
>
> stan
>
>
> --- On *Wed, 7/2/08, Ariovaldo de Souza Junior <ariovaldojunior at gmail.com>
> * wrote:
>
> From: Ariovaldo de Souza Junior <ariovaldojunior at gmail.com>
> Subject: [mpich-discuss] core 2 quad and other multiple core processors
> To: mpich-discuss at mcs.anl.gov
> Date: Wednesday, July 2, 2008, 1:15 PM
>
>
> Hello everybody!
>
> I'm really a newbie at clustering, so I have some, let's say, stupid
> questions. When I start a job like "mpiexec -l -n 6 ./cpi" on my small
> cluster of (so far) 6 Core 2 Quad machines, I'm sending 1 process to each
> node, right? Assuming that is correct, will each process use only 1 core
> of each node? And how can I make 1 process use the whole processing
> capacity of the processor, all 4 cores? Is there a way to do this, or
> will each process always use just one core? If I change the submission
> to "mpiexec -l -n 24 ./cpi", then the same program will run 24 times, 4
> processes per node (perhaps simultaneously) and one process per core,
> right?
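[For reference: how mpiexec spreads ranks across nodes is controlled by the
machinefile. With an MPICH2 (mpd-era) mpiexec, a `host:n` entry tells it that
up to n ranks may be placed on that host, so `-n 24` over six quad-core hosts
yields four ranks per node, one per core. The hostnames below are placeholders,
not taken from this thread; adapt them to your cluster.]

```shell
# Hypothetical machinefile for a 6-node Core 2 Quad cluster
# (node01..node06 are example hostnames).
# "hostname:4" allows mpiexec to place up to 4 ranks on that host,
# one per core.
cat > machinefile <<'EOF'
node01:4
node02:4
node03:4
node04:4
node05:4
node06:4
EOF

# 24 ranks -> 4 per node, one per core:
mpiexec -machinefile machinefile -n 24 ./cpi
```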
>
> I'm asking all this because I find it a bit strange that the processing
> time increases each time I add one more process, when in my mind it
> should be the opposite. Here are some examples:
>
> mpiexec -n 1 ./cpi
> wall clock time = 0.000579
>
> mpiexec -n 2 ./cpi
> wall clock time = 0.002442
>
> mpiexec -n 3 ./cpi
> wall clock time = 0.004568
>
> mpiexec -n 4 ./cpi
> wall clock time = 0.005150
>
> mpiexec -n 5 ./cpi
> wall clock time = 0.008923
>
> mpiexec -n 6 ./cpi
> wall clock time = 0.009309
>
> mpiexec -n 12 ./cpi
> wall clock time = 0.019445
>
> mpiexec -n 18 ./cpi
> wall clock time = 0.032204
>
> mpiexec -n 24 ./cpi
> wall clock time = 0.045413
>
> mpiexec -n 48 ./cpi
> wall clock time = 0.089815
>
> mpiexec -n 96 ./cpi
> wall clock time = 0.218894
>
> mpiexec -n 192 ./cpi
> wall clock time = 0.492870
>
> So, as you can see, the more processes I add, the more time it takes,
> which makes me think that MPI performed this test 192 times in the end,
> and that is why the time increased. Is it correct that MPI ran the same
> test 192 times? Or did it divide the work into 192 pieces, compute them,
> then gather the results and assemble the output? I would really like to
> understand this relationship between processor count and process count.
>
> I have the feeling that my questions are a bit "poor" and really those of
> a newbie, but the answers will help me use other programs that need MPI
> to run.
>
> Thanks to all!
>
> Ari - UFAM - Brazil
>
>
>

