<div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Fri, Jun 29, 2018 at 9:34 AM Vaclav Hapla <<a href="mailto:vaclav.hapla@erdw.ethz.ch">vaclav.hapla@erdw.ethz.ch</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
>> On Jun 22, 2018, at 17:47, Smith, Barry F. <bsmith@mcs.anl.gov> wrote:
>>
>>> On Jun 22, 2018, at 5:43 AM, Pierre Jolivet <pierre.jolivet@enseeiht.fr> wrote:
>>>
>>> Hello,
>>> I’m solving a system using a MATSHELL and PCGAMG.
>>> The MPIAIJ Mat I’m giving to GAMG has a specific structure (inherited from the MATSHELL) that I’d like to exploit during the solution phase, when the smoother on the finest level is doing MatMults.
>>>
>>> Is there some way to:
>>> 1) decouple, in -log_view, the time spent in the MATSHELL MatMult and in the smoothers’ MatMults
>>
>> You can register a new event and then, inside your MATSHELL MatMult(), call PetscLogEventBegin/End on your new event.
>>
>> Note that the MatMult() line will still contain the time for your MATSHELL mult, so you will need to subtract it off to get the time for your non-shell MatMults.
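
A minimal sketch of what Barry describes; the event and function names here are illustrative, not from the thread:

    #include <petscmat.h>

    static PetscLogEvent USER_SHELLMULT; /* hypothetical event handle */

    /* the MATSHELL multiply, with the custom event wrapped around the work */
    static PetscErrorCode MyShellMult(Mat A, Vec x, Vec y)
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = PetscLogEventBegin(USER_SHELLMULT,A,x,y,0);CHKERRQ(ierr);
      /* ... the actual shell matrix-vector product ... */
      ierr = PetscLogEventEnd(USER_SHELLMULT,A,x,y,0);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

    /* call once during setup, before the first solve */
    static PetscErrorCode RegisterShellMultEvent(void)
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = PetscLogEventRegister("ShellMult",MAT_CLASSID,&USER_SHELLMULT);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

-log_view then gains a separate "ShellMult" line; per Barry's note, subtracting its time from the aggregate MatMult line gives the time spent in the non-shell MatMults.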
>
> In PERMON, we sometimes have a quite complicated hierarchy of wrapped matrices and want to measure MatMult{,Transpose,Add,TransposeAdd} separately for particular ones. Think, e.g., of an additive MATCOMPOSITE wrapping a multiplicative MATCOMPOSITE wrapping a MATTRANSPOSE wrapping a MATAIJ. You want to measure this MATAIJ instance’s MatMult separately, but you surely don’t want to rewrite the implementation of MatMult_Transpose or force yourself to use MATSHELL just to hang the events on MatMult*.
>
> We had a special wrapper type that just added a prefix to the events for the given object, but this is not nice. What about adding functionality to PetscLogEventBegin/End that would distinguish events based on the first PetscObject’s name or options prefix? Optionally, of course, so as not to break anyone relying on the current behavior, e.g. under something like -log_view_by_name. To me it’s quite an elegant solution, working for any PetscObject and any event.

As people have pointed out, this would not work well for events. However, this is exactly what stages are for. Use separate stages for the different types of MatMult. I did this, for example, when looking at performance on different MG levels.

  Matt
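
A minimal sketch of the stage-based approach, with a stage name chosen here purely for illustration:

    #include <petscsys.h>

    static PetscLogStage stage_fine; /* hypothetical stage handle */

    static PetscErrorCode TimeFineSmoother(void)
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = PetscLogStageRegister("FineSmoother",&stage_fine);CHKERRQ(ierr);
      ierr = PetscLogStagePush(stage_fine);CHKERRQ(ierr);
      /* ... the calls whose MatMult time should be reported separately ... */
      ierr = PetscLogStagePop();CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

-log_view prints a per-stage breakdown, so the MatMult line inside the "FineSmoother" stage is reported separately from the MatMults in the rest of the run.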
> I can do that if I get some upvotes.
>
> Vaclav
>
>>
>>> 2) hardwire a specific MatMult implementation for the smoother on the finest level
>>
>> In the latest release, you can use MatSetOperation() to override the normal matrix-vector product with anything else you want.
>>
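A minimal sketch of overriding the product this way; the structure-exploiting routine is hypothetical:

    #include <petscmat.h>

    /* replacement product that exploits the matrix's known structure */
    static PetscErrorCode StructuredMult(Mat A, Vec x, Vec y)
    {
      PetscFunctionBegin;
      /* ... structure-exploiting matrix-vector product ... */
      PetscFunctionReturn(0);
    }

    /* after assembling the fine-level MPIAIJ operator A */
    static PetscErrorCode OverrideMult(Mat A)
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = MatSetOperation(A,MATOP_MULT,(void (*)(void))StructuredMult);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

The override then applies wherever the smoother calls MatMult() on the finest-level operator, without touching the smoother itself.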
>>>
>>> Thanks in advance,
>>> Pierre
>>>
>>> PS: here is what I have right now,
>>> MatMult 118 1.0 1.0740e+02 1.6 1.04e+13 1.6 1.7e+06 6.1e+05 0.0e+00 47100 90 98 0 47100 90 98 0 81953703
>>> […]
>>> PCSetUp 2 1.0 8.6513e+00 1.0 1.01e+09 1.7 2.6e+05 4.0e+05 1.8e+02 5 0 14 10 66 5 0 14 10 68 94598
>>> PCApply 14 1.0 8.0373e+01 1.1 9.06e+12 1.6 1.3e+06 6.0e+05 2.1e+01 45 87 72 78 8 45 87 72 78 8 95365211 // I’m guessing a lot of time here is wasted doing inefficient MatMults on the finest level, but this is only speculation
>>>
>>> Same code with -pc_type none -ksp_max_it 13,
>>> MatMult 14 1.0 1.2936e+01 1.7 1.35e+12 1.6 2.0e+05 6.1e+05 0.0e+00 15100 78 93 0 15100 78 93 0 88202079
>>>
>>> The grid itself is rather simple (two levels, extremely aggressive coarsening),
>>>   type is MULTIPLICATIVE, levels=2 cycles=v
>>>   KSP Object: (mg_coarse_) 1024 MPI processes
>>>     linear system matrix = precond matrix:
>>>     Mat Object: 1024 MPI processes
>>>       type: mpiaij
>>>       rows=775, cols=775
>>>       total: nonzeros=1793, allocated nonzeros=1793
>>>
>>>   linear system matrix followed by preconditioner matrix:
>>>   Mat Object: 1024 MPI processes
>>>     type: shell
>>>     rows=1369307136, cols=1369307136
>>>   Mat Object: 1024 MPI processes
>>>     type: mpiaij
>>>     rows=1369307136, cols=1369307136
>>>     total: nonzeros=19896719360, allocated nonzeros=19896719360
>>

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/