<div dir="ltr"><div><div>Hi Barry,<br><br></div>I was thinking along those lines when I first considered putting together an MKL-based matrix class, but I decided that I wanted to be able to support the standard MKL sparse BLAS routines as well as the inspector-executor model ones. My plan was to add some logic to check whether a "matrix optimization" phase has been done, and use the inspector-executor routines (where possible) if so. Maybe it would be better to have two matrix classes (one for the standard MKL sparse BLAS, the other using the inspector-executor stuff), but that seemed to be adding more classes than necessary (recognizing that we also need a BAIJMKL class for the block CSR case).<br><br></div><div>Unfortunately, I note that, in my very limited experimentation so far, the standard sparse BLAS routines from MKL are slightly slower than the PETSc-provided kernels, which is good for PETSc, but bad for showing any utility of my hacking on this front. =)<br></div><div><br></div>--Richard<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Feb 13, 2017 at 5:48 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
At this time I would hide the hints and just set reasonable ones: have the MatAssemblyEnd for the new class provide the default hints and then call the optimize routine, but keep all of that stuff hidden from the PETSc API.<br>
<span class="HOEnZb"><font color="#888888"><br>
Barry<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
> On Feb 13, 2017, at 6:03 PM, Richard Mills <<a href="mailto:richardtmills@gmail.com">richardtmills@gmail.com</a>> wrote:<br>
><br>
> Hi All,<br>
><br>
> I've finally gotten around to putting together a matrix class (called AIJMKL for now) that inherits from AIJ but uses some methods provided by Intel MKL. One thing I'd like to support is the "SpMV 2" sparse inspector-executor routines, detailed in the MKL reference manual starting at<br>
><br>
> <a href="https://software.intel.com/en-us/node/590105" rel="noreferrer" target="_blank">https://software.intel.com/en-<wbr>us/node/590105</a><br>
><br>
> The basic usage model is this: You set some hints (see the functions at <a href="https://software.intel.com/en-us/node/590120" rel="noreferrer" target="_blank">https://software.intel.com/en-<wbr>us/node/590120</a>) about how a matrix you have is going to be used, e.g., you call 'mkl_sparse_set_mv_hint' to set a guess about the number of upcoming matrix-vector multiplications that will be performed with this matrix. Once you've set the various hints, you call 'mkl_sparse_optimize' to start an analysis operation in which MKL examines the sparsity structure and may reorganize its internal data structures for better performance. You can then call mkl_sparse_d_mv to perform mat-vec operations.<br>
><br>
> I am wondering what the proper way to provide a PETSc interface to these would be. I can think of a few ways to do this; the most straightforward is probably:<br>
><br>
> * Add MatAIJMKLSetMVHint(), MatAIJMKLSetMMHint(), etc.<br>
> * Add MatAIJMKLOptimize() call.<br>
><br>
> I will also be adding a BAIJMKL class for BAIJ matrices, so I'd duplicate these calls there. Does this option sound OK? Or do others think I should do something more general?<br>
><br>
> --Richard<br>
<br>
</div></div></blockquote></div><br></div>