[petsc-users] summary of the bandwidth received with different number of MPI processes

Barry Smith bsmith at mcs.anl.gov
Sun Nov 1 10:11:47 CST 2015


  Just plot the bandwidth yourself using gnuplot or Matlab or something.
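
  For example, here is a minimal sketch of such a plot, assuming Python with matplotlib is available (gnuplot or Matlab would do just as well); it parses the scaling.log format shown in the quoted message below:

    import re
    import matplotlib.pyplot as plt

    # collect (process count, Triad rate) pairs from scaling.log
    nps, rates = [], []
    with open("scaling.log") as f:
        for line in f:
            m = re.search(r"Number of MPI processes\s+(\d+)", line)
            if m:
                nps.append(int(m.group(1)))
            m = re.search(r"Triad:\s+([\d.]+)\s+Rate", line)
            if m:
                rates.append(float(m.group(1)))

    # plot achieved memory bandwidth against the number of MPI processes
    plt.plot(nps, rates, "o-")
    plt.xlabel("MPI processes")
    plt.ylabel("Triad rate (MB/s)")
    plt.savefig("scaling.png")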

  Also you might benefit from using process binding http://www.mcs.anl.gov/petsc/documentation/faq.html#computers
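
  The exact binding syntax depends on your MPI implementation: for example, MPICH's mpiexec accepts something like "mpiexec -n 12 -bind-to core ./MPIVersion" and Open MPI accepts "mpiexec -n 12 --bind-to core ./MPIVersion"; check your MPI's documentation for the option your version supports.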

> On Oct 31, 2015, at 11:26 PM, TAY wee-beng <zonexo at gmail.com> wrote:
> 
> 
> On 1/11/2015 1:17 AM, Barry Smith wrote:
>>   Yes, just put the output from running with 1, 2, etc. processes in order into the file
> Hi,
> 
> I just did that, but I got some errors.
> 
> The scaling.log file is:
> 
> Number of MPI processes 3 Processor names  n12-06 n12-06 n12-06
> Triad:        27031.0419   Rate (MB/s)
> Number of MPI processes 6 Processor names  n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
> Triad:        53517.8980   Rate (MB/s)
> Number of MPI processes 12 Processor names  n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
> Triad:        53162.5346   Rate (MB/s)
> Number of MPI processes 24 Processor names  n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
> Triad:       101455.6581   Rate (MB/s)
> Number of MPI processes 48 Processor names  n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
> Triad:       115575.8960   Rate (MB/s)
> Number of MPI processes 96 Processor names  n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
> Triad:       223742.1796   Rate (MB/s)
> Number of MPI processes 192 Processor names  n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10
> Triad:       436940.9859   Rate (MB/s)
> 
> When I tried to run "./process.py createfile ; process.py", I got:
> 
> np  speedup
> Traceback (most recent call last):
>  File "./process.py", line 110, in <module>
>    process(len(sys.argv)-1)
>  File "./process.py", line 34, in process
>    speedups[sizes] = triads[sizes]/triads[1]
> KeyError: 1
> Traceback (most recent call last):
>  File "./process.py", line 110, in <module>
>    process(len(sys.argv)-1)
>  File "./process.py", line 34, in process
>    speedups[sizes] = triads[sizes]/triads[1]
> KeyError: 1
> 
> How can I solve it? Thanks.
> 
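
The KeyError comes from process.py normalising every Triad rate by the 1-process result, triads[1]; the scaling.log above starts at 3 processes, so that key never exists. A minimal sketch of the failing logic, using the rates from the log above and the variable names from the traceback (not the actual process.py source):

    # Triad rates keyed by number of MPI processes, as parsed from scaling.log above
    triads = {3: 27031.0419, 6: 53517.8980, 12: 53162.5346, 24: 101455.6581,
              48: 115575.8960, 96: 223742.1796, 192: 436940.9859}
    speedups = {}
    for sizes in triads:
        # dividing by the 1-process baseline fails: the dictionary has no key 1
        speedups[sizes] = triads[sizes] / triads[1]

Submitting MPIVersion with 1 (and 2) MPI processes as well, and putting that output at the top of scaling.log as described in step 3 of the instructions quoted below, gives process.py the baseline it needs.
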
>>> On Oct 31, 2015, at 11:41 AM, TAY wee-beng <zonexo at gmail.com> wrote:
>>> 
>>> Hi,
>>> 
>>> It's mentioned that for a batch system, I have to:
>>> 
>>> 1. cd src/benchmarks/streams
>>> 2. make MPIVersion
>>> 3. submit MPIVersion to the batch system a number of times with 1, 2, 3, etc. MPI processes, collecting all of the output from the runs into the single file scaling.log.
>>> 4. copy scaling.log into the src/benchmarks/streams directory
>>> 5. ./process.py createfile ; process.py
>>> 
>>> So for step 3, how do I collect all of the output from the runs into the single file scaling.log?
>>> 
>>> Should scaling.log look like this:
>>> 
>>> Number of MPI processes 3 Processor names  n12-06 n12-06 n12-06
>>> Triad:        27031.0419   Rate (MB/s)
>>> Number of MPI processes 6 Processor names  n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
>>> Triad:        53517.8980   Rate (MB/s)
>>> 
>>> ...
>>> 
>>> 
>>> 
>>> -- 
>>> Thank you.
>>> 
>>> Yours sincerely,
>>> 
>>> TAY wee-beng
>>> 
> 


