[petsc-dev] Kokkos/Crusher performance
Mark Adams
mfadams at lbl.gov
Fri Jan 21 20:37:29 CST 2022
Here is one with 2M / GPU. Getting better.
On Fri, Jan 21, 2022 at 9:17 PM Barry Smith <bsmith at petsc.dev> wrote:
>
> Matt is correct, vectors are way too small.
>
> BTW: Now would be a good time to run some of the Report I benchmarks on
> Crusher to get a feel for the kernel launch times and performance on VecOps.
>
> Also Report 2.
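>
> Something along these lines (a bare sketch, not the actual Report I driver; the file name, sizes, and options here are made up) would give a first read on per-call VecOp cost with the Kokkos back end: run it over a range of -n with -vec_type kokkos and watch where time/call stops being flat (launch-latency bound) and starts growing with n (bandwidth bound).
>
> /* vecop_probe.c: time VecAXPY on a possibly-device vector */
> #include <petscvec.h>
> #include <petsctime.h>
> int main(int argc, char **argv)
> {
>   Vec            x, y;
>   PetscInt       n = 1048576, i, nits = 400;
>   PetscReal      nrm;
>   PetscLogDouble t0, t1;
>   PetscErrorCode ierr;
>
>   ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
>   ierr = PetscOptionsGetInt(NULL, NULL, "-n", &n, NULL);CHKERRQ(ierr);
>   ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
>   ierr = VecSetSizes(x, PETSC_DECIDE, n);CHKERRQ(ierr);
>   ierr = VecSetFromOptions(x);CHKERRQ(ierr);      /* -vec_type kokkos selects the device back end */
>   ierr = VecDuplicate(x, &y);CHKERRQ(ierr);
>   ierr = VecSet(x, 1.0);CHKERRQ(ierr);
>   ierr = VecSet(y, 2.0);CHKERRQ(ierr);
>   ierr = VecAXPY(y, 1.0, x);CHKERRQ(ierr);        /* warm up: first launch and any device setup */
>   ierr = VecNorm(y, NORM_1, &nrm);CHKERRQ(ierr);
>   ierr = PetscTime(&t0);CHKERRQ(ierr);
>   for (i = 0; i < nits; i++) {ierr = VecAXPY(y, 1.0, x);CHKERRQ(ierr);}
>   ierr = VecNorm(y, NORM_1, &nrm);CHKERRQ(ierr);  /* forces queued kernels to finish before t1 */
>   ierr = PetscTime(&t1);CHKERRQ(ierr);
>   ierr = PetscPrintf(PETSC_COMM_WORLD, "n=%D  VecAXPY %g sec/call\n", n, (double)((t1 - t0)/nits));CHKERRQ(ierr);
>   ierr = VecDestroy(&x);CHKERRQ(ierr);
>   ierr = VecDestroy(&y);CHKERRQ(ierr);
>   ierr = PetscFinalize();
>   return ierr;
> }
>
> The same loop around VecTDot or VecNorm adds the cost of the device-to-host reduction, which is the other number worth having.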
>
> Barry
>
>
> On Jan 21, 2022, at 7:58 PM, Matthew Knepley <knepley at gmail.com> wrote:
>
> On Fri, Jan 21, 2022 at 6:41 PM Mark Adams <mfadams at lbl.gov> wrote:
>
>> I am looking at the performance of a CG/Jacobi solve on a 3D Q2 Laplacian
>> (ex13) on one Crusher node (8 GPUs on 4 GPU sockets; MI250X, or is it
>> MI200?).
>> This is with a 16M equation problem. GPU-aware MPI and non-GPU-aware MPI
>> are similar (mat-vec is a little faster without it, the total is about the
>> same; call it noise).
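>>
>> (Roughly, the runs are of the form
>> srun -N1 -n64 ./ex13 -dm_plex_box_faces 4,4,4 -petscpartitioner_simple_process_grid 4,4,4 -dm_refine <r> -dm_mat_type aijkokkos -dm_vec_type kokkos -ksp_type cg -pc_type jacobi -ksp_max_it 200 -log_view
>> with srun -n 8 and a matching process grid for the 1 rank/GPU case; GPU-binding
>> flags are left out here, and the full option table is in the -log_view output below.)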
>>
>> I found that MatMult was about 3x faster using 8 cores/GPU, that is, all
>> 64 cores on the node, than when using 1 core/GPU, with the same size
>> problem of course.
>> I was thinking MatMult should be faster with just one MPI process. Oh
>> well, worry about that later.
>>
>> The bigger problem, and I have observed this to some extent with the
>> Landau TS/SNES/GPU solver on the V/A100s, is that the vector operations are
>> expensive or crazy expensive.
>> You can see from the attached log, and from the times here, that the solve
>> is dominated by everything other than mat-vec:
>>
>>
>> ------------------------------------------------------------------------------------------------------------------------
>> Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total   GPU    - CpuToGpu -   - GpuToCpu - GPU
>>                    Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s Mflop/s Count   Size   Count   Size  %F
>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------
>> 17:15 main= /gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tests/data$ grep "MatMult 400" jac_out_00*5_8_gpuawaremp*
>> MatMult              400 1.0 1.2507e+00 1.3 1.34e+10 1.1 3.7e+05 1.6e+04 0.0e+00  1 55 62 54  0 27 91100100  0 668874       0      0 0.00e+00    0 0.00e+00 100
>> 17:15 main= /gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tests/data$ grep "KSPSolve 2" jac_out_001*_5_8_gpuawaremp*
>> KSPSolve               2 1.0 4.4173e+00 1.0 1.48e+10 1.1 3.7e+05 1.6e+04 1.2e+03  4 60 62 54 61 100100100100100 208923 1094405      0 0.00e+00    0 0.00e+00 100
>>
>> Notes about the flop counters here:
>> * MatMult flops are not logged as GPU flops, but something is logged
>> nonetheless.
>> * The GPU flop rate is 5x the total flop rate in KSPSolve :\
>> * I think these nodes have an FP64 peak flop rate of 200 Tflops, so we
>> are at < 1% (rough arithmetic below).
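>>
>> (Back-of-the-envelope from the numbers above: the KSPSolve total rate of
>> 208923 Mflop/s is about 0.21 Tflop/s, i.e. roughly 0.1% of a ~200 Tflop/s
>> FP64 node peak; even the reported GPU rate of 1094405 Mflop/s is only
>> about 1.1 Tflop/s, or ~0.5% of peak.)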
>>
>
> This looks complicated, so just a single remark:
>
> My understanding of the benchmarking of vector ops led by Hannah was that
> you needed to be much
> bigger than 16M to hit peak. I need to get the tech report, but on 8 GPUs
> I would think you would be
> at 10% of peak or something right off the bat at these sizes. Barry, is
> that right?
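>
> (For scale: 16M equations over 8 GPUs is ~2M doubles, about 16 MB, per
> vector per GPU, so a single VecAXPY moves on the order of 50 MB per GCD;
> at memory bandwidths in the TB/s range that is only tens of microseconds
> of useful work per kernel, which is easy to lose to launch latency and,
> for the dot products and norms, the host-side MPI reduction.)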
>
> Thanks,
>
> Matt
>
>
>> Anyway, not sure how to proceed, but I thought I would share.
>> Maybe ask the Kokkos guys if they have looked at Crusher.
>>
>> Mark
>>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>
>
>
-------------- next part --------------
DM Object: box 64 MPI processes
type: plex
box in 3 dimensions:
Number of 0-cells per rank: 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625 274625
Number of 1-cells per rank: 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200 811200
Number of 2-cells per rank: 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720 798720
Number of 3-cells per rank: 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144 262144
Labels:
celltype: 4 strata with value/size (0 (274625), 1 (811200), 4 (798720), 7 (262144))
depth: 4 strata with value/size (0 (274625), 1 (811200), 2 (798720), 3 (262144))
marker: 1 strata with value/size (1 (49530))
Face Sets: 3 strata with value/size (1 (16129), 3 (16129), 6 (16129))
Linear solve did not converge due to DIVERGED_ITS iterations 200
KSP Object: 64 MPI processes
type: cg
maximum iterations=200, initial guess is zero
tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
left preconditioning
using UNPRECONDITIONED norm type for convergence test
PC Object: 64 MPI processes
type: jacobi
type DIAGONAL
linear system matrix = precond matrix:
Mat Object: 64 MPI processes
type: mpiaijkokkos
rows=133432831, cols=133432831
total: nonzeros=8477185319, allocated nonzeros=8477185319
total number of mallocs used during MatSetValues calls=0
not using I-node (on process 0) routines
Linear solve did not converge due to DIVERGED_ITS iterations 200
KSP Object: 64 MPI processes
type: cg
maximum iterations=200, initial guess is zero
tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
left preconditioning
using UNPRECONDITIONED norm type for convergence test
PC Object: 64 MPI processes
type: jacobi
type DIAGONAL
linear system matrix = precond matrix:
Mat Object: 64 MPI processes
type: mpiaijkokkos
rows=133432831, cols=133432831
total: nonzeros=8477185319, allocated nonzeros=8477185319
total number of mallocs used during MatSetValues calls=0
not using I-node (on process 0) routines
Linear solve did not converge due to DIVERGED_ITS iterations 200
KSP Object: 64 MPI processes
type: cg
maximum iterations=200, initial guess is zero
tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
left preconditioning
using UNPRECONDITIONED norm type for convergence test
PC Object: 64 MPI processes
type: jacobi
type DIAGONAL
linear system matrix = precond matrix:
Mat Object: 64 MPI processes
type: mpiaijkokkos
rows=133432831, cols=133432831
total: nonzeros=8477185319, allocated nonzeros=8477185319
total number of mallocs used during MatSetValues calls=0
not using I-node (on process 0) routines
************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
/gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tests/data/../ex13 on a arch-olcf-crusher named crusher002 with 64 processors, by adams Fri Jan 21 21:30:42 2022
Using Petsc Development GIT revision: v3.16.3-665-g1012189b9a GIT Date: 2022-01-21 16:28:20 +0000
Max Max/Min Avg Total
Time (sec): 7.587e+02 1.000 7.587e+02
Objects: 2.158e+03 1.163 1.919e+03
Flop: 1.958e+11 1.036 1.936e+11 1.239e+13
Flop/sec: 2.581e+08 1.036 2.552e+08 1.633e+10
MPI Messages: 1.656e+04 3.672 9.423e+03 6.031e+05
MPI Message Lengths: 8.942e+08 2.047 7.055e+04 4.255e+10
MPI Reductions: 1.991e+03 1.000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flop
and VecAXPY() for complex vectors of length N --> 8N flop
Summary of Stages: ----- Time ------ ----- Flop ------ --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total Count %Total Avg %Total Count %Total
0: Main Stage: 7.4508e+02 98.2% 4.9150e+12 39.7% 2.287e+05 37.9% 8.566e+04 46.0% 7.660e+02 38.5%
1: PCSetUp: 2.3487e-01 0.0% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
2: KSP Solve only: 1.3364e+01 1.8% 7.4764e+12 60.3% 3.744e+05 62.1% 6.133e+04 54.0% 1.206e+03 60.6%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flop: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
AvgLen: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flop in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
GPU Mflop/s: 10e-6 * (sum of flop on GPU over all processors)/(max GPU time over all processors)
CpuToGpu Count: total number of CPU to GPU copies per processor
CpuToGpu Size (Mbytes): 10e-6 * (total size of CPU to GPU copies per processor)
GpuToCpu Count: total number of GPU to CPU copies per processor
GpuToCpu Size (Mbytes): 10e-6 * (total size of GPU to CPU copies per processor)
GPU %F: percent flops on GPU in this event
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flop --- Global --- --- Stage ---- Total GPU - CpuToGpu - - GpuToCpu - GPU
Max Ratio Max Ratio Max Ratio Mess AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s Count Size Count Size %F
---------------------------------------------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
PetscBarrier 6 1.0 2.5547e+00 1.0 0.00e+00 0.0 1.4e+04 2.6e+03 2.1e+01 0 0 2 0 1 0 0 6 0 3 0 0 0 0.00e+00 0 0.00e+00 0
BuildTwoSided 42 1.0 1.2079e+0138.1 0.00e+00 0.0 1.0e+04 4.0e+00 4.2e+01 1 0 2 0 2 1 0 5 0 5 0 0 0 0.00e+00 0 0.00e+00 0
BuildTwoSidedF 6 1.0 1.1776e+01102.0 0.00e+00 0.0 2.2e+03 1.6e+06 6.0e+00 1 0 0 8 0 1 0 1 18 1 0 0 0 0.00e+00 0 0.00e+00 0
MatMult 48589241.7 4.9614e+00 1.3 5.37e+10 1.0 1.9e+05 6.0e+04 2.0e+00 1 27 32 27 0 1 69 84 59 0 683454 0 1 4.48e-01 0 0.00e+00 100
MatAssemblyBegin 43 1.0 1.2036e+0118.5 0.00e+00 0.0 2.2e+03 1.6e+06 6.0e+00 1 0 0 8 0 1 0 1 18 1 0 0 0 0.00e+00 0 0.00e+00 0
MatAssemblyEnd 43 1.0 2.5511e+00 2.4 4.71e+06 0.0 0.0e+00 0.0e+00 9.0e+00 0 0 0 0 0 0 0 0 0 1 88 0 0 0.00e+00 0 0.00e+00 0
MatZeroEntries 3 1.0 3.4779e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
MatView 1 1.0 3.6725e-03 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
KSPSetUp 1 1.0 2.0211e-02 5.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
KSPSolve 1 1.0 7.9097e+00 1.1 5.91e+10 1.0 1.9e+05 6.1e+04 6.0e+02 1 30 31 27 30 1 76 83 59 79 472611 2055307 1 4.48e-01 0 0.00e+00 100
SNESSolve 1 1.0 3.2269e+02 1.0 6.85e+10 1.0 1.9e+05 7.0e+04 6.1e+02 43 35 32 31 31 43 88 84 68 80 13442 2053264 3 1.83e+01 2 2.92e+01 86
SNESSetUp 1 1.0 1.1372e+02 1.0 0.00e+00 0.0 5.3e+03 7.7e+05 1.8e+01 15 0 1 10 1 15 0 2 21 2 0 0 0 0.00e+00 0 0.00e+00 0
SNESFunctionEval 2 1.0 4.3308e+01 1.1 6.33e+09 1.0 1.7e+03 5.1e+04 3.0e+00 6 3 0 0 0 6 8 1 0 0 9331 92565 3 3.46e+01 2 2.92e+01 0
SNESJacobianEval 2 1.0 5.7848e+02 1.0 1.21e+10 1.0 1.7e+03 2.2e+06 2.0e+00 76 6 0 9 0 78 16 1 19 0 1336 0 0 0.00e+00 2 2.92e+01 0
DMCreateInterp 1 1.0 8.6822e-02 1.0 8.29e+04 1.0 1.1e+03 8.0e+02 1.6e+01 0 0 0 0 1 0 0 0 0 2 61 0 0 0.00e+00 0 0.00e+00 0
DMCreateMat 1 1.0 1.1370e+02 1.0 0.00e+00 0.0 5.3e+03 7.7e+05 1.8e+01 15 0 1 10 1 15 0 2 21 2 0 0 0 0.00e+00 0 0.00e+00 0
Mesh Partition 1 1.0 9.0146e-02 1.0 0.00e+00 0.0 3.2e+02 1.1e+02 8.0e+00 0 0 0 0 0 0 0 0 0 1 0 0 0 0.00e+00 0 0.00e+00 0
Mesh Migration 1 1.0 4.8488e-02 1.4 0.00e+00 0.0 1.8e+03 8.3e+01 2.9e+01 0 0 0 0 1 0 0 1 0 4 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexPartSelf 1 1.0 8.1141e-0457.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexPartLblInv 1 1.0 1.0122e-03 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexPartLblSF 1 1.0 4.2598e-03 1.6 0.00e+00 0.0 1.3e+02 5.6e+01 1.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexPartStrtSF 1 1.0 5.5499e-02 1.0 0.00e+00 0.0 6.3e+01 2.2e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexPointSF 1 1.0 3.6001e-02 1.9 0.00e+00 0.0 1.3e+02 2.7e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexInterp 19 1.0 1.3525e-03 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexDistribute 1 1.0 1.7069e-01 1.1 0.00e+00 0.0 2.2e+03 9.7e+01 3.7e+01 0 0 0 0 2 0 0 1 0 5 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexDistCones 1 1.0 1.0424e-03 1.1 0.00e+00 0.0 3.8e+02 1.4e+02 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexDistLabels 1 1.0 1.7272e-03 1.1 0.00e+00 0.0 9.0e+02 6.6e+01 2.4e+01 0 0 0 0 1 0 0 0 0 3 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexDistField 1 1.0 4.5457e-02 1.4 0.00e+00 0.0 4.4e+02 5.9e+01 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexStratify 34 1.0 1.4763e-01 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 0 0 0 0 0 1 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexSymmetrize 34 1.0 2.0135e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexPrealloc 1 1.0 1.1351e+02 1.0 0.00e+00 0.0 5.3e+03 7.7e+05 1.6e+01 15 0 1 10 1 15 0 2 21 2 0 0 0 0.00e+00 0 0.00e+00 0
DMPlexResidualFE 2 1.0 3.9761e+01 1.1 6.29e+09 1.0 0.0e+00 0.0e+00 0.0e+00 5 3 0 0 0 5 8 0 0 0 10129 0 0 0.00e+00 0 0.00e+00 0
DMPlexJacobianFE 2 1.0 5.7769e+02 1.0 1.21e+10 1.0 1.1e+03 3.2e+06 2.0e+00 76 6 0 8 0 77 16 0 18 0 1336 0 0 0.00e+00 0 0.00e+00 0
DMPlexInterpFE 1 1.0 8.6686e-02 1.0 8.29e+04 1.0 1.1e+03 8.0e+02 1.6e+01 0 0 0 0 1 0 0 0 0 2 61 0 0 0.00e+00 0 0.00e+00 0
SFSetGraph 46 1.0 7.3878e-03 7.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
SFSetUp 36 1.0 9.1987e-01 1.3 0.00e+00 0.0 1.9e+04 7.8e+04 3.6e+01 0 0 3 3 2 0 0 8 7 5 0 0 0 0.00e+00 0 0.00e+00 0
SFBcastBegin 68 1.0 2.4336e+0032.0 0.00e+00 0.0 1.4e+04 4.8e+04 0.0e+00 0 0 2 2 0 0 0 6 3 0 0 0 1 1.20e+00 4 5.83e+01 0
SFBcastEnd 68 1.0 4.0039e+00 9.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 2.48e-02 0 0.00e+00 0
SFReduceBegin 17 1.0 3.3664e-0138.1 4.19e+06 1.0 4.5e+03 3.2e+05 0.0e+00 0 0 1 3 0 0 0 2 7 0 793 0 2 3.34e+01 0 0.00e+00 100
SFReduceEnd 17 1.0 3.4163e+0028.5 9.91e+04 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1 0 0 0.00e+00 0 0.00e+00 100
SFFetchOpBegin 2 1.0 1.6056e-02219.2 0.00e+00 0.0 5.6e+02 8.2e+05 0.0e+00 0 0 0 1 0 0 0 0 2 0 0 0 0 0.00e+00 0 0.00e+00 0
SFFetchOpEnd 2 1.0 1.2403e-01 2.8 0.00e+00 0.0 5.6e+02 8.2e+05 0.0e+00 0 0 0 1 0 0 0 0 2 0 0 0 0 0.00e+00 0 0.00e+00 0
SFCreateEmbed 9 1.0 8.5563e-0160.1 0.00e+00 0.0 2.3e+03 2.4e+03 0.0e+00 0 0 0 0 0 0 0 1 0 0 0 0 0 0.00e+00 0 0.00e+00 0
SFDistSection 9 1.0 8.0596e-02 3.2 0.00e+00 0.0 4.1e+03 2.3e+04 1.1e+01 0 0 1 0 1 0 0 2 0 1 0 0 0 0.00e+00 0 0.00e+00 0
SFSectionSF 17 1.0 3.0716e-01 3.1 0.00e+00 0.0 6.4e+03 7.4e+04 1.7e+01 0 0 1 1 1 0 0 3 2 2 0 0 0 0.00e+00 0 0.00e+00 0
SFRemoteOff 8 1.0 8.8914e-0122.0 0.00e+00 0.0 7.3e+03 4.3e+03 5.0e+00 0 0 1 0 0 0 0 3 0 1 0 0 0 0.00e+00 0 0.00e+00 0
SFPack 294 1.0 9.1895e-0135.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 2 5.96e-01 0 0.00e+00 0
SFUnpack 296 1.0 3.5822e-0112.8 4.29e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 758 0 0 2.48e-02 0 0.00e+00 100
VecTDot 401 1.0 1.5293e+00 2.0 1.68e+09 1.0 0.0e+00 0.0e+00 4.0e+02 0 1 0 0 20 0 2 0 0 52 69975 257651 0 0.00e+00 0 0.00e+00 100
VecNorm 201 1.0 1.1513e+00 4.8 8.43e+08 1.0 0.0e+00 0.0e+00 2.0e+02 0 0 0 0 10 0 1 0 0 26 46591 531611 0 0.00e+00 0 0.00e+00 100
VecCopy 2 1.0 3.1749e-03 7.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
VecSet 55 1.0 3.9489e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
VecAXPY 400 1.0 4.4361e-0111.8 1.68e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 2 0 0 0 240630 447948 0 0.00e+00 0 0.00e+00 100
VecAYPX 199 1.0 1.4928e+0048.1 8.35e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 1 0 0 0 35575 40020 0 0.00e+00 0 0.00e+00 100
VecPointwiseMult 201 1.0 1.8526e-01 6.8 4.22e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 1 0 0 0 144766 298879 0 0.00e+00 0 0.00e+00 100
VecScatterBegin 201 1.0 1.0198e+00 4.2 0.00e+00 0.0 1.9e+05 6.0e+04 2.0e+00 0 0 32 27 0 0 0 84 59 0 0 0 1 4.48e-01 0 0.00e+00 0
VecScatterEnd 201 1.0 2.6847e+00 5.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DualSpaceSetUp 2 1.0 3.1979e-02 8.7 1.80e+03 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4 0 0 0.00e+00 0 0.00e+00 0
FESetUp 2 1.0 2.6628e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
PCSetUp 1 1.0 2.2663e-05 6.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
PCApply 201 1.0 5.3205e-01 1.5 4.22e+08 1.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 1 0 0 0 50408 266707 0 0.00e+00 0 0.00e+00 100
--- Event Stage 1: PCSetUp
PCSetUp 1 1.0 2.6921e-01 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 100 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
--- Event Stage 2: KSP Solve only
MatMult 400 1.0 9.4293e+00 1.2 1.07e+11 1.0 3.7e+05 6.1e+04 0.0e+00 1 55 62 54 0 65 91100100 0 719222 0 0 0.00e+00 0 0.00e+00 100
MatView 2 1.0 4.9777e-03 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
KSPSolve 2 1.0 1.3931e+01 1.1 1.18e+11 1.0 3.7e+05 6.1e+04 1.2e+03 2 60 62 54 60 100100100100100 536679 2645387 0 0.00e+00 0 0.00e+00 100
SFPack 400 1.0 2.2779e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
SFUnpack 400 1.0 1.1793e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
VecTDot 802 1.0 3.1187e+00 2.1 3.36e+09 1.0 0.0e+00 0.0e+00 8.0e+02 0 2 0 0 40 14 3 0 0 67 68626 228080 0 0.00e+00 0 0.00e+00 100
VecNorm 402 1.0 1.8810e+00 3.7 1.69e+09 1.0 0.0e+00 0.0e+00 4.0e+02 0 1 0 0 20 6 1 0 0 33 57033 531550 0 0.00e+00 0 0.00e+00 100
VecCopy 4 1.0 7.5582e-0320.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
VecSet 4 1.0 3.8575e-0316.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
VecAXPY 800 1.0 8.6590e-0114.3 3.36e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 6 3 0 0 0 246557 447665 0 0.00e+00 0 0.00e+00 100
VecAYPX 398 1.0 2.1979e+0039.3 1.67e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 6 1 0 0 0 48325 56771 0 0.00e+00 0 0.00e+00 100
VecPointwiseMult 402 1.0 3.6755e-01 8.7 8.43e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 2 1 0 0 0 145939 316445 0 0.00e+00 0 0.00e+00 100
VecScatterBegin 400 1.0 1.5821e+0049.1 0.00e+00 0.0 3.7e+05 6.1e+04 0.0e+00 0 0 62 54 0 7 0100100 0 0 0 0 0.00e+00 0 0.00e+00 0
VecScatterEnd 400 1.0 5.7843e+00 6.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 13 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
PCApply 402 1.0 3.6785e-01 8.6 8.43e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 2 1 0 0 0 145819 316445 0 0.00e+00 0 0.00e+00 100
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Container 33 33 19008 0.
SNES 1 1 1540 0.
DMSNES 1 1 688 0.
Krylov Solver 1 1 1664 0.
DMKSP interface 1 1 656 0.
Matrix 76 76 1627827176 0.
Distributed Mesh 72 72 58971680 0.
DM Label 180 180 113760 0.
Quadrature 148 148 87616 0.
Mesh Transform 6 6 4536 0.
Index Set 833 833 4238868 0.
IS L to G Mapping 2 2 8590824 0.
Section 256 256 182272 0.
Star Forest Graph 179 179 195360 0.
Discrete System 121 121 116164 0.
Weak Form 122 122 75152 0.
GraphPartitioner 34 34 23392 0.
Vector 55 55 157137560 0.
Linear Space 5 5 3416 0.
Dual Space 26 26 24336 0.
FE Space 2 2 1576 0.
Viewer 2 1 840 0.
Preconditioner 1 1 872 0.
Field over DM 1 1 704 0.
--- Event Stage 1: PCSetUp
--- Event Stage 2: KSP Solve only
========================================================================================================================
Average time to get PetscTime(): 5.41e-08
Average time for MPI_Barrier(): 4.0096e-06
Average time for zero size MPI_Send(): 1.0329e-05
#PETSc Option Table entries:
-benchmark_it 2
-dm_distribute
-dm_mat_type aijkokkos
-dm_plex_box_faces 4,4,4
-dm_plex_box_lower 0,0,0
-dm_plex_box_upper 1,1,1
-dm_plex_dim 3
-dm_plex_simplex 0
-dm_refine 6
-dm_vec_type kokkos
-dm_view
-ksp_converged_reason
-ksp_max_it 200
-ksp_norm_type unpreconditioned
-ksp_rtol 1.e-12
-ksp_type cg
-ksp_view
-log_view
-mg_levels_esteig_ksp_max_it 10
-mg_levels_esteig_ksp_type cg
-mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type jacobi
-options_left
-pc_gamg_coarse_eq_limit 100
-pc_gamg_coarse_grid_layout_type compact
-pc_gamg_esteig_ksp_max_it 10
-pc_gamg_esteig_ksp_type cg
-pc_gamg_process_eq_limit 400
-pc_gamg_repartition false
-pc_gamg_reuse_interpolation true
-pc_gamg_square_graph 0
-pc_gamg_threshold -0.01
-pc_type jacobi
-petscpartitioner_simple_node_grid 1,1,1
-petscpartitioner_simple_process_grid 4,4,4
-petscpartitioner_type simple
-potential_petscspace_degree 2
-snes_max_it 1
-snes_rtol 1.e-8
-snes_type ksponly
-use_gpu_aware_mpi true
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-cc=cc --with-cxx=CC --with-fc=ftn --with-fortran-bindings=0 LIBS="-L/opt/cray/pe/mpich/8.1.12/gtl/lib -lmpi_gtl_hsa" --with-debugging=0 --with-mpiexec="srun -p batch -N 1 -A csc314_crusher -t 00:10:00" --with-hip --with-hipc=hipcc --download-hypre --download-hypre-configure-arguments=--enable-unified-memory --with-hip-arch=gfx90a --download-kokkos --download-kokkos-kernels PETSC_ARCH=arch-olcf-crusher
-----------------------------------------
Libraries compiled on 2022-01-21 19:20:56 on login2
Machine characteristics: Linux-5.3.18-59.16_11.0.39-cray_shasta_c-x86_64-with-glibc2.3.4
Using PETSc directory: /gpfs/alpine/csc314/scratch/adams/petsc
Using PETSc arch: arch-olcf-crusher
-----------------------------------------
Using C compiler: cc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -Qunused-arguments -fvisibility=hidden -g -O3
Using Fortran compiler: ftn -fPIC
-----------------------------------------
Using include paths: -I/gpfs/alpine/csc314/scratch/adams/petsc/include -I/gpfs/alpine/csc314/scratch/adams/petsc/arch-olcf-crusher/include -I/opt/rocm-4.5.0/include
-----------------------------------------
Using C linker: cc
Using Fortran linker: ftn
Using libraries: -Wl,-rpath,/gpfs/alpine/csc314/scratch/adams/petsc/arch-olcf-crusher/lib -L/gpfs/alpine/csc314/scratch/adams/petsc/arch-olcf-crusher/lib -lpetsc -Wl,-rpath,/gpfs/alpine/csc314/scratch/adams/petsc/arch-olcf-crusher/lib -L/gpfs/alpine/csc314/scratch/adams/petsc/arch-olcf-crusher/lib -Wl,-rpath,/opt/rocm-4.5.0/lib -L/opt/rocm-4.5.0/lib -Wl,-rpath,/opt/cray/pe/mpich/8.1.12/gtl/lib -L/opt/cray/pe/mpich/8.1.12/gtl/lib -Wl,-rpath,/opt/cray/pe/gcc/8.1.0/snos/lib64 -L/opt/cray/pe/gcc/8.1.0/snos/lib64 -Wl,-rpath,/opt/cray/pe/libsci/21.08.1.2/CRAY/9.0/x86_64/lib -L/opt/cray/pe/libsci/21.08.1.2/CRAY/9.0/x86_64/lib -Wl,-rpath,/opt/cray/pe/mpich/8.1.12/ofi/cray/10.0/lib -L/opt/cray/pe/mpich/8.1.12/ofi/cray/10.0/lib -Wl,-rpath,/opt/cray/pe/dsmml/0.2.2/dsmml/lib -L/opt/cray/pe/dsmml/0.2.2/dsmml/lib -Wl,-rpath,/opt/cray/pe/pmi/6.0.16/lib -L/opt/cray/pe/pmi/6.0.16/lib -Wl,-rpath,/opt/cray/pe/cce/13.0.0/cce/x86_64/lib -L/opt/cray/pe/cce/13.0.0/cce/x86_64/lib -Wl,-rpath,/opt/cray/xpmem/2.3.2-2.2_1.16__g9ea452c.shasta/lib64 -L/opt/cray/xpmem/2.3.2-2.2_1.16__g9ea452c.shasta/lib64 -Wl,-rpath,/opt/cray/pe/cce/13.0.0/cce-clang/x86_64/lib/clang/13.0.0/lib/linux -L/opt/cray/pe/cce/13.0.0/cce-clang/x86_64/lib/clang/13.0.0/lib/linux -Wl,-rpath,/opt/cray/pe/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0 -L/opt/cray/pe/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0 -Wl,-rpath,/opt/cray/pe/cce/13.0.0/binutils/x86_64/x86_64-unknown-linux-gnu/lib -L/opt/cray/pe/cce/13.0.0/binutils/x86_64/x86_64-unknown-linux-gnu/lib -lHYPRE -lkokkoskernels -lkokkoscontainers -lkokkoscore -lhipsparse -lhipblas -lrocsparse -lrocsolver -lrocblas -lrocrand -lamdhip64 -ldl -lmpi_gtl_hsa -lmpifort_cray -lmpi_cray -ldsmml -lpmi -lpmi2 -lxpmem -lstdc++ -lpgas-shmem -lquadmath -lmodules -lfi -lcraymath -lf -lu -lcsup -lgfortran -lpthread -lgcc_eh -lm -lclang_rt.craypgo-x86_64 -lclang_rt.builtins-x86_64 -lquadmath -ldl -lmpi_gtl_hsa
-----------------------------------------
#PETSc Option Table entries:
-benchmark_it 2
-dm_distribute
-dm_mat_type aijkokkos
-dm_plex_box_faces 4,4,4
-dm_plex_box_lower 0,0,0
-dm_plex_box_upper 1,1,1
-dm_plex_dim 3
-dm_plex_simplex 0
-dm_refine 6
-dm_vec_type kokkos
-dm_view
-ksp_converged_reason
-ksp_max_it 200
-ksp_norm_type unpreconditioned
-ksp_rtol 1.e-12
-ksp_type cg
-ksp_view
-log_view
-mg_levels_esteig_ksp_max_it 10
-mg_levels_esteig_ksp_type cg
-mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type jacobi
-options_left
-pc_gamg_coarse_eq_limit 100
-pc_gamg_coarse_grid_layout_type compact
-pc_gamg_esteig_ksp_max_it 10
-pc_gamg_esteig_ksp_type cg
-pc_gamg_process_eq_limit 400
-pc_gamg_repartition false
-pc_gamg_reuse_interpolation true
-pc_gamg_square_graph 0
-pc_gamg_threshold -0.01
-pc_type jacobi
-petscpartitioner_simple_node_grid 1,1,1
-petscpartitioner_simple_process_grid 4,4,4
-petscpartitioner_type simple
-potential_petscspace_degree 2
-snes_max_it 1
-snes_rtol 1.e-8
-snes_type ksponly
-use_gpu_aware_mpi true
#End of PETSc Option Table entries
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
There are 14 unused database options. They are:
Option left: name:-mg_levels_esteig_ksp_max_it value: 10
Option left: name:-mg_levels_esteig_ksp_type value: cg
Option left: name:-mg_levels_ksp_chebyshev_esteig value: 0,0.05,0,1.05
Option left: name:-mg_levels_ksp_type value: chebyshev
Option left: name:-mg_levels_pc_type value: jacobi
Option left: name:-pc_gamg_coarse_eq_limit value: 100
Option left: name:-pc_gamg_coarse_grid_layout_type value: compact
Option left: name:-pc_gamg_esteig_ksp_max_it value: 10
Option left: name:-pc_gamg_esteig_ksp_type value: cg
Option left: name:-pc_gamg_process_eq_limit value: 400
Option left: name:-pc_gamg_repartition value: false
Option left: name:-pc_gamg_reuse_interpolation value: true
Option left: name:-pc_gamg_square_graph value: 0
Option left: name:-pc_gamg_threshold value: -0.01