[petsc-users] Read in sequential, solve in parallel
    Jed Brown
    jed at 59A2.org
    Wed Sep 29 10:04:00 CDT 2010
The stage totals are aggregate. More info in the users manual. You can add
stages to distinguish between different phases of your program.
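The stages mentioned above are registered and delimited with the PETSc logging API. A minimal sketch (stage names and the work inside each stage are placeholders; the profile appeared under `-log_summary` at the time of this thread, `-log_view` in current PETSc):

```c
#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscLogStage assembly, solve;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Register named stages; each shows up as its own
     "Event Stage" section in the profiling output */
  PetscLogStageRegister("Assembly", &assembly);
  PetscLogStageRegister("Solve", &solve);

  PetscLogStagePush(assembly);
  /* ... matrix assembly: MatSetValues(), MatAssemblyBegin/End() ... */
  PetscLogStagePop();

  PetscLogStagePush(solve);
  /* ... KSPSolve() ... */
  PetscLogStagePop();

  PetscFinalize();
  return 0;
}
```

Events such as MatMult are then attributed to whichever stage was active when they ran, instead of all landing in "Main Stage".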
Your results look good except for the dot/norm timing, but that won't make a
big difference.
Jed
On Sep 29, 2010 4:34 PM, "Moinier, Pierre (UK)" <
Pierre.Moinier at baesystems.com> wrote:
Jed,
You are right I built the matrix from a Poisson problem using a 5pts
discretization.
I have now found out why I wasn't getting the correct scaling. That was
due to a silly mistake in submitting my executable.
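For archive readers, the read-in-sequential/solve-in-parallel pattern discussed in this thread loads a binary matrix and vector collectively, with rows distributed across all ranks. A sketch using the current `MatLoad(Mat, PetscViewer)` calling sequence (PETSc 3.2 and later; the 2010-era API passed the viewer first), with a hypothetical file name:

```c
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  Vec         b;
  PetscViewer viewer;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Open the binary file collectively on the parallel communicator */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "poisson.dat", FILE_MODE_READ, &viewer);

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetType(A, MATMPIAIJ);
  MatLoad(A, viewer);   /* each rank receives its block of rows */

  VecCreate(PETSC_COMM_WORLD, &b);
  VecLoad(b, viewer);   /* same row layout as the matrix */

  /* ... KSPSolve() ... */

  VecDestroy(&b);
  MatDestroy(&A);
  PetscViewerDestroy(&viewer);
  PetscFinalize();
  return 0;
}
```

The MatLoad and VecLoad events in the log below correspond to these two calls.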
With 4 cores, I get:
--- Event Stage 0: Main Stage
MatMult             1633 1.0 6.9578e+00 1.2 3.67e+09 1.0 9.8e+03 8.0e+03
0.0e+00 41 43100 59  0  41 43100 59  0  2110
MatAssemblyBegin       1 1.0 1.8351e-01182.2 0.00e+00 0.0 0.0e+00
0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         1 1.0 1.6289e-02 1.0 0.00e+00 0.0 1.2e+01 2.0e+03
7.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLoad                1 1.0 6.6239e-01 1.0 0.00e+00 0.0 2.1e+01 2.3e+06
9.0e+00  4  0  0 36  0   4  0  0 36  0     0
VecDot              3266 1.0 2.3861e+00 1.6 1.63e+09 1.0 0.0e+00 0.0e+00
3.3e+03 11 19  0  0 66  11 19  0  0 66  2737
VecNorm             1634 1.0 3.8494e+00 1.2 8.17e+08 1.0 0.0e+00 0.0e+00
1.6e+03 23 10  0  0 33  23 10  0  0 33   849
VecCopy             1636 1.0 1.0704e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00  5  0  0  0  0   5  0  0  0  0     0
VecSet                 1 1.0 6.0201e-04 2.3 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY             3266 1.0 2.0010e+00 1.2 1.63e+09 1.0 0.0e+00 0.0e+00
0.0e+00 11 19  0  0  0  11 19  0  0  0  3264
VecAYPX             1632 1.0 8.4769e-01 1.4 8.16e+08 1.0 0.0e+00 0.0e+00
0.0e+00  4 10  0  0  0   4 10  0  0  0  3850
VecAssemblyBegin       1 1.0 3.3454e-02477.3 0.00e+00 0.0 0.0e+00
0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         1 1.0 3.0994e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecLoad                1 1.0 7.2319e-02 1.0 0.00e+00 0.0 3.0e+00 2.0e+06
4.0e+00  0  0  0  5  0   0  0  0  5  0     0
VecScatterBegin     1633 1.0 2.4417e-02 2.3 0.00e+00 0.0 9.8e+03 8.0e+03
0.0e+00  0  0100 59  0   0  0100 59  0     0
VecScatterEnd       1633 1.0 1.0537e+0024.4 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00  3  0  0  0  0   3  0  0  0  0     0
KSPSetup               1 1.0 2.6400e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 1.5425e+01 1.0 8.57e+09 1.0 9.8e+03 8.0e+03
4.9e+03 95100100 59 99  95100100 59100  2222
PCSetUp                1 1.0 0.0000e+00 0.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00  0  0  0  0  0   0 ...
PCApply             1634 1.0 1.0700e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00  5  0  0  0  0   5  0  0  0  0     0
For a single core, I was getting 4.4828e+01 for KSPSolve.
Am I correct to assume that what is listed under "Event Stage 0: Main
Stage" is common to each core?
Finally, what is the meaning of "Event Stage 0: Main Stage"?
Cheers,
       -Pierre.
    
    
More information about the petsc-users mailing list