<div dir="ltr"><div><div><div><div><div>Thanks, Barry,<br></div><div><br>Log info:<br><br></div>AIJ:<br><br>MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594<br>MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394<br>MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br><br></div>BAIJ:<br><br>MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617<br>MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190<br>MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br><br></div>It looks like both MatSolve and MatLUFactorNum are slower.<br><br></div>I will try your suggestions.<br><br></div>Fande<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Note also that if the 11 by 11 blocks are actually sparse (and you don't store all the zeros in the blocks in the AIJ format) then the AIJ non-block factorization involves fewer floating-point operations and less memory access, so it can be faster than the BAIJ format, depending on "how sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with AIJ (with zeros maybe in certain locations) then the above is not true.<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
> On Mar 6, 2017, at 5:10 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
><br>
><br>
> This is because for block size 11 it is using calls to LAPACK/BLAS for the block operations instead of custom routines for that block size.<br>
><br>
> Here is what you need to do. For a good sized case run both with -log_view and check the time spent in<br>
> MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and BAIJ. If they have a different number of function calls then divide by the function call count to determine the time per function call.<br>
><br>
> This will tell you which routine needs to be optimized first either MatLUFactorNumeric or MatSolve. My guess is MatSolve.<br>
><br>
> So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function MatSolve_SeqBAIJ_11_NaturalOrdering_ver1(). Edit the new function for the block size of 11.<br>
><br>
> Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if the block size is 11 it uses the new routine, something like:<br>
><br>
> if (both_identity) {<br>
>   if (b->bs == 11) {<br>
>     C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1;<br>
>   } else {<br>
>     C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering;<br>
>   }<br>
> }<br>
><br>
> Rerun and look at the new -log_view. Send all three -log_view outputs to us at this point. If this optimization helps and<br>
> MatLUFactorNumeric is now the time sink, you can repeat the process with MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make a custom version for block size 11.<br>
><br>
> Barry<br>
><br>
>> On Mar 6, 2017, at 4:32 PM, Kong, Fande <<a href="mailto:fande.kong@inl.gov">fande.kong@inl.gov</a>> wrote:<br>
>><br>
>><br>
>><br>
>> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan <<a href="mailto:patrick.sanan@gmail.com">patrick.sanan@gmail.com</a>> wrote:<br>
>> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande <<a href="mailto:fande.kong@inl.gov">fande.kong@inl.gov</a>> wrote:<br>
>>> Hi All,<br>
>>><br>
>>> I am solving a nonlinear system whose Jacobian matrix has a block structure.<br>
>>> More precisely, there is a mesh, and for each vertex there are 11 variables<br>
>>> associated with it. I am using BAIJ.<br>
>>><br>
>>> I thought block ILU(k) should be more efficient than the point-wise ILU(k).<br>
>>> After some numerical experiments, I found that the block ILU(K) is much<br>
>>> slower than the point-wise version.<br>
>> Do you mean that it takes more iterations to converge, or that the<br>
>> time per iteration is greater, or both?<br>
>><br>
>> The number of iterations is very similar, but the time per iteration is greater.<br>
>><br>
>><br>
>>><br>
>>> Any thoughts?<br>
>>><br>
>>> Fande,<br>
>><br>
><br>
<br>
</div></div></blockquote></div><br></div>