[mpich-discuss] executing time
samantha lin
wl8150 at googlemail.com
Tue Jul 14 12:00:56 CDT 2009
Thanks, Bob.
Now it's clear; I hadn't realised how large the comm overhead could be.
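
(Just to check my understanding with a rough, made-up example: multiplying two
N x N matrices takes about 2*N^3 floating point operations in total, while a
master/worker scheme typically ships on the order of N^2 matrix elements to and
from each worker over MPI_Send/MPI_Recv, plus the cost of launching the extra
processes. For small matrices that message traffic can cost more than the
multiply itself, and on a 2-core laptop any ranks beyond 2 only add more
messages without adding more compute.)
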
On Tue, Jul 14, 2009 at 4:49 PM, bob ilgner <bobilgner at gmail.com> wrote:
> Hi, it's late at night here and I forgot to include my timings, also done on a
> 2-core PC:
>
> 1 node (-n 1, i.e. no comms)
> Master wall clock time 0.074823
>
> 2 nodes (-n 2)
> Master wall clock time 0.041086
>
> 4 nodes (-n 4)
> Master wall clock time 0.047954
>
> 8 nodes (-n 8)
> Master wall clock time 0.062776
>
> This is what I would expect from a 2-core machine. I imagine the increase for
> n > 2 is due to the additional comms overhead.
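>
> For reference, the master wall clock numbers above can be taken with
> MPI_Wtime, roughly along these lines (just a minimal sketch with placeholder
> names, not the exact instrumented code):
>
> #include <mpi.h>
> #include <stdio.h>
>
> int main(int argc, char *argv[])
> {
>     int rank;
>     double t0, t1;
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>     t0 = MPI_Wtime();                  /* start of the timed region */
>     /* ... distribute the matrices, multiply, collect the results ... */
>     t1 = MPI_Wtime();                  /* end of the timed region */
>
>     if (rank == 0)
>         printf("Master wall clock time %f\n", t1 - t0);
>
>     MPI_Finalize();
>     return 0;
> }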
>
>
>
> regards, bob
>
> On Tue, Jul 14, 2009 at 5:25 PM, bob ilgner <bobilgner at gmail.com> wrote:
>
>> Hi WL,
>>
>> I had a quick look at the code and note that you can NOT run this as a serial
>> process, i.e. using only 1 process. You would need to change the code for
>> that; -n 1 does not run anything.
>>
>> I've added a few printf comments and wall clock timings to show when the code
>> is running and when it is not, i.e. with -n 1 it does not run. Try it out with
>> these markers and see how it works out.
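>>
>> The markers follow a pattern roughly like this (a minimal sketch with made-up
>> names, not the exact code I put in; with -n 1 there are no worker ranks, so
>> the master's loops are empty and the worker branch never runs):
>>
>> #include <mpi.h>
>> #include <stdio.h>
>>
>> int main(int argc, char *argv[])
>> {
>>     int rank, size, i, token = 0;
>>     MPI_Status st;
>>
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>>
>>     if (rank == 0) {
>>         printf("master: %d worker(s)\n", size - 1);
>>         /* with -n 1 these loops do nothing: there are no workers */
>>         for (i = 1; i < size; i++)
>>             MPI_Send(&token, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
>>         for (i = 1; i < size; i++)
>>             MPI_Recv(&token, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &st);
>>         printf("master: done\n");
>>     } else {
>>         printf("worker %d: starting\n", rank);
>>         MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &st);
>>         /* the real program would multiply its share of the matrices here */
>>         MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
>>         printf("worker %d: done\n", rank);
>>     }
>>
>>     MPI_Finalize();
>>     return 0;
>> }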
>>
>> Regards, bob
>>
>>
>>
>>
>> On Tue, Jul 14, 2009 at 4:00 PM, samantha lin <wl8150 at googlemail.com> wrote:
>>
>>> Hi Bob,
>>>
>>> Yes, that was the command I executed. Apart from 2 processes, I also tried a
>>> variety of numbers and found '-n 1' is the fastest. I also wrote a similar C
>>> program without using the MPI libraries, and it was faster than '-n 1'. I
>>> didn't put timing marks in the program; instead, I prefixed the command with
>>> 'time', e.g. "time mpiexec -n 2 myprog". (Should that be okay?)
>>> I have attached my program to this email. Hope that helps.
>>>
>>> Regards,
>>> WL
>>>
>>> On Tue, Jul 14, 2009 at 2:25 PM, bob ilgner <bobilgner at gmail.com> wrote:
>>>
>>>> Hi WL,
>>>>
>>>> What mpiexec command are you using to run your program? E.g. "mpiexec -n 2
>>>> xprogy", where xprogy is the name of your program?
>>>>
>>>> What is your timing for the serial case and for the multiple-core case? Is
>>>> it a short program that you can list here so we can have a look at it? Have
>>>> you tried to place timing marks in the program to analyse what is happening?
>>>>
>>>> Regards, bob
>>>>
>>>>
>>>> On Tue, Jul 14, 2009 at 1:42 PM, samantha lin <wl8150 at googlemail.com> wrote:
>>>>
>>>>> Hi bob,
>>>>>
>>>>> It's running on a MacBook Pro (Intel CPU, 2 cores); it's just one laptop,
>>>>> and the filesystem is local.
>>>>>
>>>>> Regards,
>>>>> WL
>>>>> On Tue, Jul 14, 2009 at 5:37 AM, bob ilgner <bobilgner at gmail.com> wrote:
>>>>>
>>>>>> Hi WL,
>>>>>>
>>>>>> What hardware are you running mpiexec on, and what sort of process is
>>>>>> this? A little description, please.
>>>>>>
>>>>>> Regards, bob
>>>>>>
>>>>>> On Tue, Jul 14, 2009 at 12:49 AM, samantha lin <wl8150 at googlemail.com> wrote:
>>>>>>
>>>>>>> Hi there,
>>>>>>> I am new to MPICH. I just finished a test program which does matrix
>>>>>>> multiplication. When I used 'mpiexec' to execute the program, I found
>>>>>>> that the more processes I use, the longer it takes. I am not sure whether
>>>>>>> this result is correct, or whether I should do something more to improve
>>>>>>> it?
>>>>>>>
>>>>>>> WL
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>