Performance Log Question
Barry Smith
bsmith at mcs.anl.gov
Thu Jul 31 20:27:22 CDT 2008
From the log file, it seems you never make any nontrivial PETSc calls.
Do you get the same log if you just run with -log_summary?
Barry
Event                Count      Time (sec)     Flops/sec                        --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
------------------------------------------------------------------------------------------------------------------------
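If all of the work happens outside PETSc, it will never appear in the
event table above. Below is a minimal sketch of how user code can be made
to show up there -- the calling sequence follows the current PETSc
documentation, and the 2.3.x Fortran bindings differ (PetscCookieRegister
rather than PetscClassIdRegister, and a different argument order for
PetscLogEventRegister), so treat the names and order as illustrative and
check the manual pages for your release:

      ! Fragment for a code that has already called PetscInitialize()
      PetscLogEvent  USER_EVENT
      PetscClassId   classid
      PetscLogDouble nflops
      PetscErrorCode ierr

      call PetscClassIdRegister('Substructuring',classid,ierr)
      call PetscLogEventRegister('Subdomain solve',classid,USER_EVENT,ierr)

      call PetscLogEventBegin(USER_EVENT,ierr)
      ! ... computation to be profiled ...
      nflops = 0.0d0                   ! replace with your own operation count
      call PetscLogFlops(nflops,ierr)
      call PetscLogEventEnd(USER_EVENT,ierr)

Anything wrapped this way gets its own row, with time and flop rate, under
"Event Stage 0: Main Stage".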
On Jul 31, 2008, at 4:33 PM, Waad Subber wrote:
> Hello,
>
> I am currently solving a 1.2 million by 1.2 million linear system
> with PETSc 2.3.3, Patch 13, using domain decomposition (iterative
> substructuring with a Krylov subspace solver). I'm running on a
> 120-CPU cluster with an InfiniBand interconnect; each node has 8
> cores -- two quad-core Xeon X5365 CPUs at 3.0 GHz -- and 32 GB of RAM.
>
> After running my code, I generate a log using the following:
> CALL PetscLogPrintSummary(PETSC_COMM_WORLD,"log.txt",ierr)
>
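(The same summary can also be produced without the explicit call by giving
the option on the command line -- the executable name below is just a
placeholder:

      mpiexec -n 120 ./myapp -log_summary

The summary is then printed to standard output when PetscFinalize() is
called.)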
> When looking at the log output, I noticed that the peak and average
> flop rates seem fairly low -- in one run, a peak of about 6e7
> flops/sec per process with an average of about 5.2e7. The exact log
> output is:
>
>                          Max       Max/Min        Avg      Total
> Time (sec):           5.283e+01      1.02357   5.169e+01
> Objects:              2.600e+02      1.00000   2.600e+02
> Flops:                3.187e+09      1.69853   2.721e+09  3.265e+11
> Flops/sec:            6.165e+07      1.69865   5.264e+07  6.317e+09
> Memory:               6.081e+07      1.39801              6.608e+09
> MPI Messages:         1.067e+05      1.00000   1.067e+05  1.281e+07
> MPI Message Lengths:  5.205e+08      1.00081   4.875e+03  6.245e+10
> MPI Reductions:       1.898e+01      1.00000
>
> Is my interpretation correct that this is a fairly low flop rate?
> Does this mean there's an issue with my code?
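(A quick consistency check on the quoted numbers: the average per-process
rate is roughly total flops / (processes x time) = 3.265e11 / (120 x
5.169e+01 s) ~ 5.26e7 flops/sec, which matches the Avg column, and
120 x 5.264e+07 ~ 6.3e9 flops/sec is the Total column. So the summary is
internally consistent; whether ~50 Mflop/s per process is low depends on
how much of the run is communication and setup rather than floating-point
work.)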
>
> I am attaching my log file
>
> Thanks
>
> Waad
>
>
> <log.txt>