<div dir="ltr">Mark,<div><br></div><div>if the difference is on lvec, I suspect the bug has to do with compressed row storage. I have fixed a similar bug in MatMult.</div><div>you want to check cusparsestruct->workVector->size() against A->cmap->n.<br></div><div><br></div><div>Stefano </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Il giorno mer 10 lug 2019 alle ore 15:54 Mark Adams via petsc-dev <<a href="mailto:petsc-dev@mcs.anl.gov">petsc-dev@mcs.anl.gov</a>> ha scritto:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jul 10, 2019 at 1:13 AM Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
Stefano

On Wed, Jul 10, 2019 at 3:54 PM Mark Adams via petsc-dev <petsc-dev@mcs.anl.gov> wrote:

On Wed, Jul 10, 2019 at 1:13 AM Smith, Barry F. <bsmith@mcs.anl.gov> wrote:

>    ierr = VecGetLocalSize(xx,&nt);CHKERRQ(ierr);
>    if (nt != A->rmap->n) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_ARG_SIZ,"Incompatible partition of A (%D) and xx (%D)",A->rmap->n,nt);
>    ierr = VecScatterInitializeForGPU(a->Mvctx,xx);CHKERRQ(ierr);
>    ierr = (*a->B->ops->multtranspose)(a->B,xx,a->lvec);CHKERRQ(ierr);
>
>    So the xx on the GPU appears ok?

The norm is correct and ...

>    The a->B appears ok?

yes

>    But on process 1 the result a->lvec is wrong?

yes
>    How do you look at the a->lvec? Do you copy it to the CPU and print it?

I use Vec[Mat]ViewFromOptions. Oh, that has not been implemented so I should copy it. Maybe I should make a CUDA version of these methods?
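In the meantime I can pull it back to the host by hand. A rough sketch of what I am doing (it relies on VecGetArrayRead doing the device-to-host copy for a VECCUDA, which I believe it does, and it assumes it sits inside MatMultTranspose_MPIAIJCUSPARSE where ierr and a are in scope):

   {
     const PetscScalar *av;
     PetscInt          i,n;
     ierr = VecGetLocalSize(a->lvec,&n);CHKERRQ(ierr);
     ierr = VecGetArrayRead(a->lvec,&av);CHKERRQ(ierr);   /* forces the device -> host copy */
     for (i=0; i<n; i++) {
       ierr = PetscPrintf(PETSC_COMM_SELF,"lvec[%D] = %18.16e\n",i,(double)PetscRealPart(av[i]));CHKERRQ(ierr);
     }
     ierr = VecRestoreArrayRead(a->lvec,&av);CHKERRQ(ierr);
   }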
>    ierr = (*a->A->ops->multtranspose)(a->A,xx,yy);CHKERRQ(ierr);
>    ierr = VecScatterBegin(a->Mvctx,a->lvec,yy,ADD_VALUES,SCATTER_REVERSE);CHKERRQ(ierr);
>    ierr = VecScatterEnd(a->Mvctx,a->lvec,yy,ADD_VALUES,SCATTER_REVERSE);CHKERRQ(ierr);
>    ierr = VecScatterFinalizeForGPU(a->Mvctx);CHKERRQ(ierr);
>    Digging around in MatMultTranspose_SeqAIJCUSPARSE doesn't help?

This is where I have been digging around and printing stuff.
>    Are you sure the problem isn't related to the "stream business"?

I don't know what that is, but I have played around with adding cudaDeviceSynchronize.
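For what it is worth, the kind of thing I have been trying is brute-force synchronization around the pieces of MatMultTranspose_MPIAIJCUSPARSE. A sketch only (cerr is a cudaError_t; whether a plain cudaDeviceSynchronize() really rules out whatever the "stream business" is doing is exactly what I don't know):

   ierr = (*a->B->ops->multtranspose)(a->B,xx,a->lvec);CHKERRQ(ierr);
   cerr = cudaDeviceSynchronize();CHKERRCUDA(cerr);   /* make sure a->lvec is complete before it is scattered */
   ierr = (*a->A->ops->multtranspose)(a->A,xx,yy);CHKERRQ(ierr);
   ierr = VecScatterBegin(a->Mvctx,a->lvec,yy,ADD_VALUES,SCATTER_REVERSE);CHKERRQ(ierr);
   ierr = VecScatterEnd(a->Mvctx,a->lvec,yy,ADD_VALUES,SCATTER_REVERSE);CHKERRQ(ierr);
   cerr = cudaDeviceSynchronize();CHKERRCUDA(cerr);   /* and again before anything reads yy */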
>    /* This multiplication sequence is different sequence
>       than the CPU version. In particular, the diagonal block
>       multiplication kernel is launched in one stream. Then,
>       in a separate stream, the data transfers from DeviceToHost
>       (with MPI messaging in between), then HostToDevice are
>       launched. Once the data transfer stream is synchronized,
>       to ensure messaging is complete, the MatMultAdd kernel
>       is launched in the original (MatMult) stream to protect
>       against race conditions.
>
>       This sequence should only be called for GPU computation. */
>    Note this comment isn't right and appears to be cut and paste from somewhere else, since there is no MatMult() nor MatMultAdd kernel here?

Yes, I noticed this. Same as MatMult and not correct here.
>    Any way to "turn off the stream business" and see if the result is then correct?

How do you do that? I'm looking at docs on streams but I am not sure how it's used here.

>    Perhaps the stream business was done correctly for MatMult() but was never right for MatMultTranspose()?
>
>    Barry
>
>    BTW: Unrelated comment, the code
>
>      ierr = VecSet(yy,0);CHKERRQ(ierr);
>      ierr = VecCUDAGetArrayWrite(yy,&yarray);CHKERRQ(ierr);
>
>    has an unneeded ierr = VecSet(yy,0);CHKERRQ(ierr); here. VecCUDAGetArrayWrite() requires that you ignore the values in yy and set them all yourself so setting them to zero before calling VecCUDAGetArrayWrite() does nothing except waste time.
OK, I'll get rid of it.
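Just to be sure I have the contract right, my reading of it (an untested sketch, not the actual code) is that the kernel behind VecCUDAGetArrayWrite() must write every entry of yy itself:

   ierr = VecCUDAGetArrayWrite(yy,&yarray);CHKERRQ(ierr);
   /* the kernel must fill yarray[0..n-1] completely; no VecSet(yy,0) beforehand */
   ierr = VecCUDARestoreArrayWrite(yy,&yarray);CHKERRQ(ierr);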
> On Jul 9, 2019, at 3:16 PM, Mark Adams via petsc-dev <petsc-dev@mcs.anl.gov> wrote:
> 
> I am stumped with this GPU bug(s). Maybe someone has an idea.
> 
> I did find a bug in the cuda transpose mat-vec that cuda-memcheck detected, but I still have differences between the GPU and CPU transpose mat-vec. I've got it down to a very simple test: bicg/none on a tiny mesh with two processors. It works on one processor or with cg/none. So it is the transpose mat-vec.
> 
> I see that the result of the off-diagonal (a->lvec) is different only on proc 1. I instrumented MatMultTranspose_MPIAIJ[CUSPARSE] with norms of mat and vec and printed out matlab vectors. Below is the CPU output and then the GPU with a view of the scatter object, which is identical as you can see.
> 
> The matlab B matrix and xx vector are identical. Maybe the GPU copy is wrong ...
> 
> The only/first difference between CPU and GPU is a->lvec (the off-diagonal contribution) on processor 1 (you can see the norms are different). Here is the diff on the process 1 a->lvec vector (all values are off).
> 
> Any thoughts would be appreciated,
> Mark
> 
> 15:30 1 /gpfs/alpine/scratch/adams/geo127$ diff lvgpu.m lvcpu.m
> 2,12c2,12
> < % type: seqcuda
> < Vec_0x53738630_0 = [
> < 9.5702137431412879e+00
> < 2.1970298791152253e+01
> < 4.5422290209190646e+00
> < 2.0185031807270226e+00
> < 4.2627312508573375e+01
> < 1.0889191983882025e+01
> < 1.6038202417695462e+01
> < 2.7155672033607665e+01
> < 6.2540357853223556e+00
> ---
> > % type: seq
> > Vec_0x3a546440_0 = [
> > 4.5565851251714653e+00
> > 1.0460532998971189e+01
> > 2.1626531807270220e+00
> > 9.6105288923182408e-01
> > 2.0295782656035659e+01
> > 5.1845791066529463e+00
> > 7.6361340020576058e+00
> > 1.2929401011659799e+01
> > 2.9776812928669392e+00
> 
> 15:15 130 /gpfs/alpine/scratch/adams/geo127$ jsrun -n 1 -c 2 -a 2 -g 1 ./ex56 -cells 2,2,1
> [0] 27 global equations, 9 vertices
> [0] 27 equations in vector, 9 vertices
> 0 SNES Function norm 1.223958326481e+02
> 0 KSP Residual norm 1.223958326481e+02
> [0] |x|= 1.223958326481e+02 |a->lvec|= 1.773965489475e+01 |B|= 1.424708937136e+00
> [1] |x|= 1.223958326481e+02 |a->lvec|= 2.844171413778e+01 |B|= 1.424708937136e+00
> [1] 1) |yy|= 2.007423334680e+02
> [0] 1) |yy|= 2.007423334680e+02
> [0] 2) |yy|= 1.957605719265e+02
> [1] 2) |yy|= 1.957605719265e+02
> [1] Number sends = 1; Number to self = 0
> [1] 0 length = 9 to whom 0
> Now the indices for all remote sends (in order by process sent to)
> [1] 9
> [1] 10
> [1] 11
> [1] 12
> [1] 13
> [1] 14
> [1] 15
> [1] 16
> [1] 17
> [1] Number receives = 1; Number from self = 0
> [1] 0 length 9 from whom 0
> Now the indices for all remote receives (in order by process received from)
> [1] 0
> [1] 1
> [1] 2
> [1] 3
> [1] 4
> [1] 5
> [1] 6
> [1] 7
> [1] 8
> 1 KSP Residual norm 8.199932342150e+01
> Linear solve did not converge due to DIVERGED_ITS iterations 1
> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0
> 
> 15:19 /gpfs/alpine/scratch/adams/geo127$ jsrun -n 1 -c 2 -a 2 -g 1 ./ex56 -cells 2,2,1 -ex56_dm_mat_type aijcusparse -ex56_dm_vec_type cuda
> [0] 27 global equations, 9 vertices
> [0] 27 equations in vector, 9 vertices
> 0 SNES Function norm 1.223958326481e+02
> 0 KSP Residual norm 1.223958326481e+02
> [0] |x|= 1.223958326481e+02 |a->lvec|= 1.773965489475e+01 |B|= 1.424708937136e+00
> [1] |x|= 1.223958326481e+02 |a->lvec|= 5.973624458725e+01 |B|= 1.424708937136e+00
> [0] 1) |yy|= 2.007423334680e+02
> [1] 1) |yy|= 2.007423334680e+02
> [0] 2) |yy|= 1.953571867633e+02
> [1] 2) |yy|= 1.953571867633e+02
> [1] Number sends = 1; Number to self = 0
> [1] 0 length = 9 to whom 0
> Now the indices for all remote sends (in order by process sent to)
> [1] 9
> [1] 10
> [1] 11
> [1] 12
> [1] 13
> [1] 14
> [1] 15
> [1] 16
> [1] 17
> [1] Number receives = 1; Number from self = 0
> [1] 0 length 9 from whom 0
> Now the indices for all remote receives (in order by process received from)
> [1] 0
> [1] 1
> [1] 2
> [1] 3
> [1] 4
> [1] 5
> [1] 6
> [1] 7
> [1] 8
> 1 KSP Residual norm 8.199932342150e+01
> 
-- 
Stefano