[petsc-users] summary of the bandwidth received with different number of MPI processes

Barry Smith bsmith at mcs.anl.gov
Sat Oct 31 12:17:37 CDT 2015


  Yes, just put the output from running with 1, 2, etc. processes, in order, into the file.
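
  For example, if each batch submission redirects its output to its own file (say run_1.out, run_2.out, ... -- those names are just an illustration), you can concatenate them in order of increasing process count:

      cat run_1.out run_2.out run_3.out run_4.out > scaling.log

  On a machine where you can launch jobs directly instead of going through the batch system, a sketch of the same idea is

      for p in 1 2 3 4; do
          mpiexec -n $p ./MPIVersion >> scaling.log    # append each run in order
      done

  Either way, scaling.log should end up with the runs in increasing order of MPI processes.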

> On Oct 31, 2015, at 11:41 AM, TAY wee-beng <zonexo at gmail.com> wrote:
> 
> Hi,
> 
> It's mentioned that for a batch system, I have to:
> 
> 1. cd src/benchmarks/streams
> 2. make MPIVersion
> 3. submit MPIVersion to the batch system a number of times with 1, 2, 3, etc. MPI processes, collecting all of the output from the runs into the single file scaling.log.
> 4. copy scaling.log into the src/benchmarks/streams directory
> 5. ./process.py createfile ; ./process.py
> 
> So for step 3, how do I collect all of the output from the runs into the single file scaling.log?
> 
> Should scaling.log look like this:
> 
> Number of MPI processes 3 Processor names  n12-06 n12-06 n12-06
> Triad:        27031.0419   Rate (MB/s)
> Number of MPI processes 6 Processor names  n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
> Triad:        53517.8980   Rate (MB/s)
> 
> ...
> 
> 
> 
> -- 
> Thank you.
> 
> Yours sincerely,
> 
> TAY wee-beng
> 


