Hello,

I am currently solving a 1.2 million by 1.2 million linear system with PETSc 2.3.3, Patch 13, using domain decomposition (iterative substructuring with a Krylov subspace solver). I'm running on a 120-CPU cluster with an InfiniBand interconnect; each node has 8 cores (two quad-core Xeon X5365 CPUs at 3.0 GHz) and 32 GB of RAM.

After running my code, I generate a log with:

      CALL PetscLogPrintSummary(PETSC_COMM_WORLD,"log.txt",ierr)
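For context, that call sits at the very end of my driver. The skeleton looks roughly like this (a sketch, not my exact code: all of the assembly and solver setup is omitted, and I believe the include path below is the 2.3.3 convention):

      program main
      implicit none
#include "include/finclude/petsc.h"
      PetscErrorCode ierr

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
!     enable logging so the summary has data to report
      call PetscLogBegin(ierr)

!     ... assemble the system, set up the KSP, and solve here ...

!     write the performance summary for every process to log.txt
      call PetscLogPrintSummary(PETSC_COMM_WORLD,'log.txt',ierr)

      call PetscFinalize(ierr)
      end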
When looking at the log output, I noticed that the peak and average flop rates seem fairly low: in one run, a peak of roughly 6.2e7 flops/sec with an average of 5.3e7 flops/sec per process. The exact log output is:

                        Max       Max/Min       Avg      Total
Time (sec):          5.283e+01     1.02357   5.169e+01
Objects:             2.600e+02     1.00000   2.600e+02
Flops:               3.187e+09     1.69853   2.721e+09  3.265e+11
Flops/sec:           6.165e+07     1.69865   5.264e+07  6.317e+09
Memory:              6.081e+07     1.39801              6.608e+09
MPI Messages:        1.067e+05     1.00000   1.067e+05  1.281e+07
MPI Message Lengths: 5.205e+08     1.00081   4.875e+03  6.245e+10
MPI Reductions:      1.898e+01     1.00000
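As a sanity check on my reading of the table (assuming one MPI process per CPU, 120 in total): 6.317e+09 flops/sec total / 120 processes = 5.26e+07 flops/sec per process, which matches the Avg column.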
Is my interpretation correct that this is a fairly low flop rate? Does it mean there is an issue with my code?

I am attaching my log file.

Thanks,

Waad