<div dir="ltr">Barry,<div><br></div><div>I think having a single code instead of three different, quasi similar, versions is less fragile ( I admit, once you get the logic correct...)</div><div>Also, it conforms with the standard for spmv that implements alpha * A * x + beta * b</div><div>The easiest fix is the following: </div><div><br></div><div>Rename MatMultAdd_ into MatMultKernel_Private and add an extra boolean to control the transpose operation</div><div>then, you can reuse the same complicated code I have wrote, just by selecting the proper cusparse object (matstructT or matstruct)</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Il giorno mer 10 lug 2019 alle ore 18:16 Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> ha scritto:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
On Wed, Jul 10, 2019 at 18:16, Smith, Barry F. <bsmith@mcs.anl.gov> wrote:

In the long run I would like to see smaller, specialized chunks of code (with a bit of duplication between them) instead of highly overloaded routines like MatMultAdd_AIJCUSPARSE. Better three routines: one for multiply alone, one for multiply-add alone, and one for multiply-add with the compressed sparse format. Trying to get all the cases right (performance and correctness) for everything at once is unnecessary and risky, and having possibly zero-size objects (and hence null pointers) doesn't help the complex logic.

Barry


> On Jul 10, 2019, at 10:06 AM, Mark Adams <mfadams@lbl.gov> wrote:
> <br>
> Thanks, you made several changes here, including switches on the workvector size. I guess I should import this logic into the transpose method(s), except for the yy==NULL branches ...
> <br>
> MatMult_ calls MatMultAdd with yy=0, but the transpose versions have their own code. MatMultTranspose_SeqAIJCUSPARSE is very simple.
> <br>
> Thanks again,<br>
> Mark<br>
> <br>
> On Wed, Jul 10, 2019 at 9:22 AM Stefano Zampini <stefano.zampini@gmail.com> wrote:
> Mark,<br>
> <br>
> If the difference is on lvec, I suspect the bug has to do with compressed row storage. I have fixed a similar bug in MatMult.
> You want to check cusparsestruct->workVector->size() against A->cmap->n.
> <br>
> Stefano <br>
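As an aside, a minimal sketch of the kind of guard this check implies; the names are placeholders (work_size standing in for cusparsestruct->workVector->size(), full_size for A->cmap->n in the transpose case), so it is illustration only, not the actual fix:

#include <petscsys.h>

/* If the work vector is shorter than the full result, the matrix is using
   compressed row storage: the cusparse kernel only produces the stored rows,
   and its output must be scattered/expanded into the full-length vector
   instead of being used directly.  Skipping that step is the kind of bug
   that shows up as a wrong lvec on one rank only. */
static PetscBool MultUsesCompressedPath(PetscInt work_size, PetscInt full_size)
{
  return (work_size != full_size) ? PETSC_TRUE : PETSC_FALSE;
}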
> <br>
> On Wed, Jul 10, 2019 at 15:54, Mark Adams via petsc-dev <petsc-dev@mcs.anl.gov> wrote:
> <br>
> <br>
> On Wed, Jul 10, 2019 at 1:13 AM Smith, Barry F. <bsmith@mcs.anl.gov> wrote:
> <br>
> ierr = VecGetLocalSize(xx,&nt);CHKERRQ(ierr);<br>
> if (nt != A->rmap->n) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_ARG_SIZ,"Incompatible partition of A (%D) and xx (%D)",A->rmap->n,nt);<br>
> ierr = VecScatterInitializeForGPU(a->Mvctx,xx);CHKERRQ(ierr);<br>
> ierr = (*a->B->ops->multtranspose)(a->B,xx,a->lvec);CHKERRQ(ierr);<br>
> <br>
> So the xx on the GPU appears ok?<br>
> <br>
> The norm is correct and ...<br>
> <br>
> The a->B appears ok?<br>
> <br>
> yes<br>
> <br>
> But on process 1 the result a->lvec is wrong? <br>
> <br>
> yes<br>
> <br>
> <br>
> How do you look at the a->lvec? Do you copy it to the CPU and print it?<br>
> <br>
> I use Vec[Mat]ViewFromOptions. Oh, that has not been implemented so I should copy it. Maybe I should make a CUDA version of these methods?<br>
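As a stop-gap, a CUDA Vec can be inspected by pulling it to the host: VecGetArrayRead() synchronizes the host copy for GPU vectors, so a helper like the following (the name is made up) prints the current device values. A minimal sketch:

#include <petscvec.h>

/* Debug helper: print a (possibly GPU-resident) vector on the host.
   VecGetArrayRead() copies device data back to the host when needed, so
   the values printed reflect the current device contents. */
static PetscErrorCode VecDumpToHost(Vec v, const char *label)
{
  const PetscScalar *a;
  PetscInt           i, n;
  PetscErrorCode     ierr;

  PetscFunctionBegin;
  ierr = VecGetLocalSize(v,&n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(v,&a);CHKERRQ(ierr);   /* device -> host copy if needed */
  for (i = 0; i < n; i++) {
    ierr = PetscPrintf(PETSC_COMM_SELF,"%s[%D] = %g\n",label,i,(double)PetscRealPart(a[i]));CHKERRQ(ierr);
  }
  ierr = VecRestoreArrayRead(v,&a);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}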
> <br>
> <br>
> ierr = (*a->A->ops->multtranspose)(a->A,xx,yy);CHKERRQ(ierr);<br>
> ierr = VecScatterBegin(a->Mvctx,a->lvec,yy,ADD_VALUES,SCATTER_REVERSE);CHKERRQ(ierr);<br>
> ierr = VecScatterEnd(a->Mvctx,a->lvec,yy,ADD_VALUES,SCATTER_REVERSE);CHKERRQ(ierr);<br>
> ierr = VecScatterFinalizeForGPU(a->Mvctx);CHKERRQ(ierr);<br>
> <br>
> Digging around in MatMultTranspose_SeqAIJCUSPARSE doesn't help? <br>
> <br>
> This is where I have been digging around and printing stuff.
> <br>
> <br>
> Are you sure the problem isn't related to the "stream business"? <br>
> <br>
> I don't know what that is, but I have played around with adding cudaDeviceSynchronize() calls.
> <br>
> <br>
> /* This multiplication sequence is different sequence<br>
> than the CPU version. In particular, the diagonal block<br>
> multiplication kernel is launched in one stream. Then,<br>
> in a separate stream, the data transfers from DeviceToHost<br>
> (with MPI messaging in between), then HostToDevice are<br>
> launched. Once the data transfer stream is synchronized,<br>
> to ensure messaging is complete, the MatMultAdd kernel<br>
> is launched in the original (MatMult) stream to protect<br>
> against race conditions.<br>
> <br>
> This sequence should only be called for GPU computation. */<br>
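For reference, the sequence that comment describes looks roughly like the sketch below. This is an illustration of the pattern only, not the PETSc code; the kernel launches are left as comments and the MPI exchange is elided.

#include <cuda_runtime.h>

/* Illustration of the two-stream sequence described in the comment above
   (not the PETSc code itself): the local kernel runs in the compute stream,
   the halo data moves in a separate copy stream, and the copy stream is
   synchronized before the kernel that consumes the received data is
   launched back in the compute stream. */
static void TwoStreamHaloSketch(cudaStream_t compute, cudaStream_t copy,
                                const double *d_send, double *h_send,
                                double *h_recv, double *d_recv,
                                size_t send_bytes, size_t recv_bytes)
{
  (void)compute; /* the kernels that would use this stream are only sketched in comments */

  /* 1) diagonal-block kernel launched in 'compute' (definition omitted):
        diag_kernel<<<grid, block, 0, compute>>>(...);                        */

  /* 2) device -> host copy of the values to send, MPI exchange (omitted),
        then host -> device copy of the received values, all in 'copy'        */
  cudaMemcpyAsync(h_send, d_send, send_bytes, cudaMemcpyDeviceToHost, copy);
  cudaStreamSynchronize(copy);             /* MPI needs the host buffer ready */
  /* ... MPI send/recv into h_recv happens here ... */
  cudaMemcpyAsync(d_recv, h_recv, recv_bytes, cudaMemcpyHostToDevice, copy);

  /* 3) make sure the messaging/transfers are complete before the add         */
  cudaStreamSynchronize(copy);

  /* 4) the MatMultAdd-style kernel that reads d_recv goes back in 'compute':
        add_kernel<<<grid, block, 0, compute>>>(...);                          */
}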
> <br>
> Note this comment isn't right and appears to be cut and pasted from somewhere else, since there is no MatMult() or MatMultAdd kernel here?
> 
> Yes, I noticed this. It is the same comment as in MatMult and is not correct here.
> <br>
> <br>
> Anyway to "turn off the stream business" and see if the result is then correct? <br>
> <br>
> How do you do that? I'm looking at docs on streams but I am not sure how they are used here.
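One crude way, as a debugging experiment rather than a fix, is to force a full device synchronization between the phases; a minimal sketch (the helper name is made up):

#include <petscsys.h>
#include <cuda_runtime.h>

/* Debugging aid: wait for every outstanding kernel and asynchronous copy on
   the device to finish.  Calling this between the kernel launch, the scatter
   (device<->host transfers), and the final add effectively turns the
   multi-stream sequence into a serial one; if the wrong result then goes
   away, the bug is a missing synchronization rather than a wrong kernel. */
static PetscErrorCode DebugSyncDevice(void)
{
  cudaError_t cerr;

  PetscFunctionBegin;
  cerr = cudaDeviceSynchronize();
  if (cerr != cudaSuccess) SETERRQ1(PETSC_COMM_SELF,PETSC_ERR_LIB,"CUDA error: %s",cudaGetErrorString(cerr));
  PetscFunctionReturn(0);
}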
> <br>
> Perhaps the stream business was done correctly for MatMult() but was never right for MatMultTranspose()?<br>
> <br>
> Barry<br>
> <br>
> BTW: Unrelated comment, the code<br>
> <br>
> ierr = VecSet(yy,0);CHKERRQ(ierr);<br>
> ierr = VecCUDAGetArrayWrite(yy,&yarray);CHKERRQ(ierr);<br>
> <br>
> has an unneeded ierr = VecSet(yy,0);CHKERRQ(ierr); here. VecCUDAGetArrayWrite() requires that you ignore the values in yy and set them all yourself, so setting them to zero before calling VecCUDAGetArrayWrite() does nothing except waste time.
> <br>
> <br>
> OK, I'll get rid of it.<br>
> <br>
> <br>
> > On Jul 9, 2019, at 3:16 PM, Mark Adams via petsc-dev <petsc-dev@mcs.anl.gov> wrote:
> > <br>
> > I am stumped by this GPU bug (or bugs). Maybe someone has an idea.
> > <br>
> > I did find a bug in the cuda transpose mat-vec that cuda-memcheck detected, but I still have differences between the GPU and CPU transpose mat-vec. I've got it down to a very simple test: bicg/none on a tiny mesh with two processors. It works on one processor or with cg/none. So it is the transpose mat-vec.<br>
> > <br>
> > I see that the result for the off-diagonal part (a->lvec) is different only on proc 1. I instrumented MatMultTranspose_MPIAIJ[CUSPARSE] with norms of the mat and vec and printed out MATLAB vectors. Below is the CPU output and then the GPU output, with a view of the scatter object, which is identical as you can see.
> > 
> > The MATLAB B matrix and xx vector are identical. Maybe the GPU copy is wrong ...
> > 
> > The only/first difference between CPU and GPU is a->lvec (the off-diagonal contribution) on processor 1 (you can see the norms are different). Here is the diff of the process 1 a->lvec vector (all values are off).
> > <br>
> > Any thoughts would be appreciated,<br>
> > Mark<br>
> > <br>
> > 15:30 1 /gpfs/alpine/scratch/adams/geo127$ diff lvgpu.m lvcpu.m<br>
> > 2,12c2,12<br>
> > < % type: seqcuda<br>
> > < Vec_0x53738630_0 = [<br>
> > < 9.5702137431412879e+00<br>
> > < 2.1970298791152253e+01<br>
> > < 4.5422290209190646e+00<br>
> > < 2.0185031807270226e+00<br>
> > < 4.2627312508573375e+01<br>
> > < 1.0889191983882025e+01<br>
> > < 1.6038202417695462e+01<br>
> > < 2.7155672033607665e+01<br>
> > < 6.2540357853223556e+00<br>
> > ---<br>
> > > % type: seq<br>
> > > Vec_0x3a546440_0 = [<br>
> > > 4.5565851251714653e+00<br>
> > > 1.0460532998971189e+01<br>
> > > 2.1626531807270220e+00<br>
> > > 9.6105288923182408e-01<br>
> > > 2.0295782656035659e+01<br>
> > > 5.1845791066529463e+00<br>
> > > 7.6361340020576058e+00<br>
> > > 1.2929401011659799e+01<br>
> > > 2.9776812928669392e+00<br>
> > <br>
> > 15:15 130 /gpfs/alpine/scratch/adams/geo127$ jsrun -n 1 -c 2 -a 2 -g 1 ./ex56 -cells 2,2,1 <br>
> > [0] 27 global equations, 9 vertices<br>
> > [0] 27 equations in vector, 9 vertices<br>
> > 0 SNES Function norm 1.223958326481e+02 <br>
> > 0 KSP Residual norm 1.223958326481e+02 <br>
> > [0] |x|= 1.223958326481e+02 |a->lvec|= 1.773965489475e+01 |B|= 1.424708937136e+00<br>
> > [1] |x|= 1.223958326481e+02 |a->lvec|= 2.844171413778e+01 |B|= 1.424708937136e+00<br>
> > [1] 1) |yy|= 2.007423334680e+02<br>
> > [0] 1) |yy|= 2.007423334680e+02<br>
> > [0] 2) |yy|= 1.957605719265e+02<br>
> > [1] 2) |yy|= 1.957605719265e+02<br>
> > [1] Number sends = 1; Number to self = 0<br>
> > [1] 0 length = 9 to whom 0<br>
> > Now the indices for all remote sends (in order by process sent to)<br>
> > [1] 9 <br>
> > [1] 10 <br>
> > [1] 11 <br>
> > [1] 12 <br>
> > [1] 13 <br>
> > [1] 14 <br>
> > [1] 15 <br>
> > [1] 16 <br>
> > [1] 17 <br>
> > [1] Number receives = 1; Number from self = 0<br>
> > [1] 0 length 9 from whom 0<br>
> > Now the indices for all remote receives (in order by process received from)<br>
> > [1] 0 <br>
> > [1] 1 <br>
> > [1] 2 <br>
> > [1] 3 <br>
> > [1] 4 <br>
> > [1] 5 <br>
> > [1] 6 <br>
> > [1] 7 <br>
> > [1] 8 <br>
> > 1 KSP Residual norm 8.199932342150e+01 <br>
> > Linear solve did not converge due to DIVERGED_ITS iterations 1<br>
> > Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0<br>
> > <br>
> > <br>
> > 15:19 /gpfs/alpine/scratch/adams/geo127$ jsrun -n 1 -c 2 -a 2 -g 1 ./ex56 -cells 2,2,1 -ex56_dm_mat_type aijcusparse -ex56_dm_vec_type cuda<br>
> > [0] 27 global equations, 9 vertices<br>
> > [0] 27 equations in vector, 9 vertices<br>
> > 0 SNES Function norm 1.223958326481e+02 <br>
> > 0 KSP Residual norm 1.223958326481e+02 <br>
> > [0] |x|= 1.223958326481e+02 |a->lvec|= 1.773965489475e+01 |B|= 1.424708937136e+00<br>
> > [1] |x|= 1.223958326481e+02 |a->lvec|= 5.973624458725e+01 |B|= 1.424708937136e+00<br>
> > [0] 1) |yy|= 2.007423334680e+02<br>
> > [1] 1) |yy|= 2.007423334680e+02<br>
> > [0] 2) |yy|= 1.953571867633e+02<br>
> > [1] 2) |yy|= 1.953571867633e+02<br>
> > [1] Number sends = 1; Number to self = 0<br>
> > [1] 0 length = 9 to whom 0<br>
> > Now the indices for all remote sends (in order by process sent to)<br>
> > [1] 9 <br>
> > [1] 10 <br>
> > [1] 11 <br>
> > [1] 12 <br>
> > [1] 13 <br>
> > [1] 14 <br>
> > [1] 15 <br>
> > [1] 16 <br>
> > [1] 17 <br>
> > [1] Number receives = 1; Number from self = 0<br>
> > [1] 0 length 9 from whom 0<br>
> > Now the indices for all remote receives (in order by process received from)<br>
> > [1] 0 <br>
> > [1] 1 <br>
> > [1] 2 <br>
> > [1] 3 <br>
> > [1] 4 <br>
> > [1] 5 <br>
> > [1] 6 <br>
> > [1] 7 <br>
> > [1] 8 <br>
> > 1 KSP Residual norm 8.199932342150e+01 <br>
> > <br>
> <br>
> <br>
> <br>
> -- <br>
> Stefano<br>

-- 
Stefano