[petsc-dev] GAMG and custom MatMults in smoothers

Pierre Jolivet pierre.jolivet at enseeiht.fr
Fri Jun 22 05:43:11 CDT 2018


Hello,
I’m solving a system using a MATSHELL and PCGAMG.
The MPIAIJ Mat I’m giving to GAMG has a specific structure (inherited from the MATSHELL) that I’d like to exploit during the solution phase, when the smoother on the finest level is doing its MatMults.
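
For reference, the solver is set up roughly as follows (SetupSolver is a placeholder name; Ashell and Aaij stand for the MATSHELL and the assembled MPIAIJ):

#include <petscksp.h>

/* Minimal sketch of the current setup, assuming Ashell (MATSHELL) and Aaij
   (MPIAIJ) have already been created elsewhere. */
static PetscErrorCode SetupSolver(Mat Ashell, Mat Aaij, KSP *ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PetscObjectComm((PetscObject)Ashell), ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(*ksp, Ashell, Aaij);CHKERRQ(ierr); /* Amat = MATSHELL, Pmat = MPIAIJ handed to GAMG */
  ierr = KSPGetPC(*ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(*ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}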

Is there some way to:
1) separate, in -log_view, the time spent in the MATSHELL MatMult from the time spent in the smoothers’ MatMults
2) hardwire a specific MatMult implementation for the smoother on the finest level
(A rough sketch of what I have in mind for both points follows.)
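
For 1) I am thinking of registering a user log event around the shell multiply, and for 2) of overriding MATOP_MULT on the finest-level operator once the GAMG hierarchy exists. The names MyShellMult, MyFineMatMult, MyShellMult_Event and OverrideFinestSmootherMult below are placeholders, and I am not sure whether the smoother multiplies with the Amat or the Pmat on the finest level, so take this only as a rough sketch:

#include <petscksp.h>

static PetscLogEvent MyShellMult_Event; /* placeholder: user-registered event */

/* 1) Wrap the MATSHELL multiply in a user event so -log_view reports it
   separately from the assembled MatMult calls done by the smoothers. */
static PetscErrorCode MyShellMult(Mat A, Vec x, Vec y)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscLogEventBegin(MyShellMult_Event, A, x, y, 0);CHKERRQ(ierr);
  /* ... matrix-free application of the operator ... */
  ierr = PetscLogEventEnd(MyShellMult_Event, A, x, y, 0);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* 2) A structure-exploiting multiply for the assembled MPIAIJ matrix. */
static PetscErrorCode MyFineMatMult(Mat A, Vec x, Vec y)
{
  PetscFunctionBeginUser;
  /* ... multiply that exploits the specific structure of the MPIAIJ Mat ... */
  PetscFunctionReturn(0);
}

static PetscErrorCode OverrideFinestSmootherMult(KSP outer)
{
  PC             pc;
  KSP            smoother;
  Mat            P;
  PetscInt       nlevels;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* Register the event once, before the first MatMult on the shell */
  ierr = PetscLogEventRegister("MyShellMult", MAT_CLASSID, &MyShellMult_Event);CHKERRQ(ierr);
  ierr = KSPSetUp(outer);CHKERRQ(ierr);                             /* builds the GAMG hierarchy */
  ierr = KSPGetPC(outer, &pc);CHKERRQ(ierr);
  ierr = PCMGGetLevels(pc, &nlevels);CHKERRQ(ierr);
  ierr = PCMGGetSmoother(pc, nlevels - 1, &smoother);CHKERRQ(ierr); /* finest level */
  ierr = KSPGetOperators(smoother, NULL, &P);CHKERRQ(ierr);         /* P should be the MPIAIJ on the finest level */
  ierr = MatSetOperation(P, MATOP_MULT, (void (*)(void))MyFineMatMult);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

MyShellMult would be installed on the MATSHELL with MatShellSetOperation(Ashell, MATOP_MULT, (void (*)(void))MyShellMult), and the user event should then show up as its own line in -log_view.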

Thanks in advance,
Pierre

PS: here is what I have right now:
MatMult              118 1.0 1.0740e+02 1.6 1.04e+13 1.6 1.7e+06 6.1e+05 0.0e+00 47100 90 98  0  47100 90 98  0 81953703
[…]
PCSetUp                2 1.0 8.6513e+00 1.0 1.01e+09 1.7 2.6e+05 4.0e+05 1.8e+02  5  0 14 10 66   5  0 14 10 68 94598
PCApply               14 1.0 8.0373e+01 1.1 9.06e+12 1.6 1.3e+06 6.0e+05 2.1e+01 45 87 72 78  8  45 87 72 78  8 95365211 // I’m guessing a lot of time here is wasted doing inefficient MatMults on the finest level, but this is only speculation

Same code with -pc_type none -ksp_max_it 13,
MatMult               14 1.0 1.2936e+01 1.7 1.35e+12 1.6 2.0e+05 6.1e+05 0.0e+00 15100 78 93  0  15100 78 93  0 88202079

The grid hierarchy itself is rather simple (two levels, extremely aggressive coarsening):
    type is MULTIPLICATIVE, levels=2 cycles=v
    KSP Object: (mg_coarse_) 1024 MPI processes
      linear system matrix = precond matrix:
      Mat Object: 1024 MPI processes
        type: mpiaij
        rows=775, cols=775
        total: nonzeros=1793, allocated nonzeros=1793

  linear system matrix followed by preconditioner matrix:
  Mat Object: 1024 MPI processes
    type: shell
    rows=1369307136, cols=1369307136
  Mat Object: 1024 MPI processes
    type: mpiaij
    rows=1369307136, cols=1369307136
    total: nonzeros=19896719360, allocated nonzeros=19896719360

