<div dir="ltr">You are in fact right, it is the same speedup of approximately 2.5x (with 2 ranks); my brain rounded up to 3. (This was just a 10-minute test on my workstation, so no pretence of being definitive; I just wanted an indication.)<div><br></div><div>As you say, I am using OpenBLAS, so those results do not surprise me. If/when I use MKL (or something similar), I really do not expect such an improvement.</div><div><br></div><div>Since you seem interested (I can give you all the details if you like): the comparison is between "petscxx", which is my expression-template code (using a single fused loop) with AVX (I force PETSc to align the memory to a 32-byte boundary and then use packets of 4 doubles), and plain PETSc. Note that I use vectors with nice lengths, so there is no need to "peel" the end of the loop. The "PETSc" run uses PETSc's VecMAXPY.</div><br><div class="gmail_quote"><div dir="ltr">On Tue, 4 Apr 2017 at 19:12 Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br class="gmail_msg">
   MAXPY isn't really a BLAS 1 operation, since it can reuse some data in certain vectors.<br class="gmail_msg">
<br class="gmail_msg">
<br class="gmail_msg">
> On Apr 4, 2017, at 10:25 AM, Filippo Leonardi <<a href="mailto:filippo.leon@gmail.com" class="gmail_msg" target="_blank">filippo.leon@gmail.com</a>> wrote:<br class="gmail_msg">
><br class="gmail_msg">
> I really appreciate the feedback. Thanks.<br class="gmail_msg">
><br class="gmail_msg">
> The risk of deadlock when the order of destruction is not preserved is a point I hadn't thought of. Maybe it can be cleverly addressed.<br class="gmail_msg">
><br class="gmail_msg">
> PS: If you are interested, I ran some benchmark on BLAS1 stuff and, for a single processor, I obtain:<br class="gmail_msg">
><br class="gmail_msg">
> Example for MAXPY, with expression templates:<br class="gmail_msg">
> BM_Vector_petscxx_MAXPY/8 38 ns 38 ns 18369805<br class="gmail_msg">
> BM_Vector_petscxx_MAXPY/64 622 ns 622 ns 1364335<br class="gmail_msg">
> BM_Vector_petscxx_MAXPY/512 281 ns 281 ns 2477718<br class="gmail_msg">
> BM_Vector_petscxx_MAXPY/4096 2046 ns 2046 ns 349954<br class="gmail_msg">
> BM_Vector_petscxx_MAXPY/32768 18012 ns 18012 ns 38788<br class="gmail_msg">
> BM_Vector_petscxx_MAXPY_BigO 0.55 N 0.55 N<br class="gmail_msg">
> BM_Vector_petscxx_MAXPY_RMS 7 % 7 %<br class="gmail_msg">
> Direct call to MAXPY:<br class="gmail_msg">
> BM_Vector_PETSc_MAXPY/8 33 ns 33 ns 20973674<br class="gmail_msg">
> BM_Vector_PETSc_MAXPY/64 116 ns 116 ns 5992878<br class="gmail_msg">
> BM_Vector_PETSc_MAXPY/512 731 ns 731 ns 963340<br class="gmail_msg">
> BM_Vector_PETSc_MAXPY/4096 5739 ns 5739 ns 122414<br class="gmail_msg">
> BM_Vector_PETSc_MAXPY/32768 46346 ns 46346 ns 15312<br class="gmail_msg">
> BM_Vector_PETSc_MAXPY_BigO 1.41 N 1.41 N<br class="gmail_msg">
> BM_Vector_PETSc_MAXPY_RMS 0 % 0 %<br class="gmail_msg">
><br class="gmail_msg">
> And 3x speedup on 2 MPI ranks (not much communication here, anyway). I am now convinced that this warrants some further investigation/testing.<br class="gmail_msg">
><br class="gmail_msg">
><br class="gmail_msg">
> On Tue, 4 Apr 2017 at 01:08 Jed Brown <<a href="mailto:jed@jedbrown.org" class="gmail_msg" target="_blank">jed@jedbrown.org</a>> wrote:<br class="gmail_msg">
> Matthew Knepley <<a href="mailto:knepley@gmail.com" class="gmail_msg" target="_blank">knepley@gmail.com</a>> writes:<br class="gmail_msg">
><br class="gmail_msg">
> >> BLAS. (Here an interesting point arises: I assume an efficient BLAS<br class="gmail_msg">
> >> implementation, but I am not so sure how the different BLAS do things<br class="gmail_msg">
> >> internally. I work from the assumption that we have a very well-tuned<br class="gmail_msg">
> >> BLAS implementation at our disposal.)<br class="gmail_msg">
> ><br class="gmail_msg">
> > The speed improvement comes from pulling vectors through memory fewer<br class="gmail_msg">
> > times by merging operations (kernel fusion).<br class="gmail_msg">
><br class="gmail_msg">
> Typical examples are VecMAXPY and VecMDot, but note that these are not<br class="gmail_msg">
> xGEMV because the vectors are independent arrays rather than single<br class="gmail_msg">
> arrays with a constant leading dimension.<br class="gmail_msg">
><br class="gmail_msg">
> >> call VecGetArray. However I will inevitably forget to return the array to<br class="gmail_msg">
> >> PETSc. I could have my new VecArray return an object that restores the<br class="gmail_msg">
> >> array when it goes out of scope. I can also flag the function with<br class="gmail_msg">
> >> [[nodiscard]] to prevent the user from discarding the returned object in<br class="gmail_msg">
> >> the first place.<br class="gmail_msg">
> >><br class="gmail_msg">
> ><br class="gmail_msg">
> > Jed claims that this pattern is no longer preferred, but I have forgotten<br class="gmail_msg">
> > his argument.<br class="gmail_msg">
> > Jed?<br class="gmail_msg">
><br class="gmail_msg">
> Destruction order matters and needs to be collective. If an error<br class="gmail_msg">
> condition causes destruction to occur in a different order on different<br class="gmail_msg">
> processes, you can get deadlock. I would much rather have an error<br class="gmail_msg">
> leave some resources (for the OS to collect) than escalate into<br class="gmail_msg">
> deadlock.<br class="gmail_msg">
><br class="gmail_msg">
> > We have had this discussion for years on this list. Having separate names<br class="gmail_msg">
> > for each type<br class="gmail_msg">
> > is really ugly and does not achieve what we want. We want smooth<br class="gmail_msg">
> > interoperability between<br class="gmail_msg">
> > objects with different backing types, but it is still not clear how to do<br class="gmail_msg">
> > this.<br class="gmail_msg">
><br class="gmail_msg">
> Hide it internally and implicitly promote. Only the *GetArray functions<br class="gmail_msg">
> need to be parametrized on numeric type. But it's a lot of work on the<br class="gmail_msg">
> backend.<br class="gmail_msg">
<br class="gmail_msg">
</blockquote></div></div>