<div dir="ltr">We currently do not have a transfer to host setup for cusparse. I have a preliminary version here <a href="https://gitlab.com/petsc/petsc/-/tree/stefanozampini/feature-mataij-create-fromcoo">https://gitlab.com/petsc/petsc/-/tree/stefanozampini/feature-mataij-create-fromcoo</a><div><br></div><div>Should be ready in a couple of days for review.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Il giorno mar 20 ott 2020 alle ore 14:37 Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> ha scritto:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Tue, Oct 20, 2020 at 4:40 AM Héctor Barreiro Cabrera <<a href="mailto:hecbarcab@gmail.com" target="_blank">hecbarcab@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr" class="gmail_attr">El jue., 15 oct. 2020 a las 23:32, Barry Smith (<<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>>) escribió:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><br></div> We still have the assumption the AIJ matrix always has a copy on the GPU. How did you fill up the matrix on the GPU while not having its copy on the CPU?<div><br></div></div></blockquote><div>My strategy here was to initialize the structure on the CPU with dummy values to have the corresponding device arrays allocated. Ideally I would have initialized the structure on a kernel as well, since my intention is to keep all data on the GPU (and not hit host memory other than for debugging). But since the topology of my problem remains constant over time, this approach proved to be sufficient. I did not find any problem with my use case so far.</div><div><br></div><div>One thing I couldn't figure out, though, is how to force PETSc to transfer the data back to host. MatView always displays the dummy values I used for initialization. Is there a function to do this?</div></div></div></blockquote><div><br></div><div>Hmm, this should happen automatically, so we have missed something. How do you change the values on the device?</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>Thanks for the replies, by the way! I'm quite surprised how responsive the PETSc community is! 
>>
>> Cheers,
>> Héctor
>>
>>>   Barry
>>>
>>>   When we remove this assumption we will have to add a bunch more code so that CPU-only operations properly get the data from the GPU.
>>>
>>> On Oct 15, 2020, at 4:16 AM, Héctor Barreiro Cabrera <hecbarcab@gmail.com> wrote:
>>>
>>>> Hello fellow PETSc users,
>>>>
>>>> Following up on my previous email (https://lists.mcs.anl.gov/pipermail/petsc-users/2020-September/042511.html), I managed to feed the entry data to a SeqAIJCUSPARSE matrix through a CUDA kernel using the new MatCUSPARSEGetDeviceMatWrite function (thanks, Barry Smith and Mark Adams!). However, I am now facing problems when trying to use this matrix within a SNES solver with the Eisenstat-Walker method enabled.
>>>>
>>>> According to PETSc's error log, the preconditioner is failing to invert the matrix diagonal. Specifically, it says:
>>>>
>>>> [0]PETSC ERROR: Arguments are incompatible
>>>> [0]PETSC ERROR: Zero diagonal on row 0
>>>> [0]PETSC ERROR: Configure options PETSC_ARCH=win64_vs2019_release --with-cc="win32fe cl" --with-cxx="win32fe cl" --with-clanguage=C++ --with-fc=0 --with-mpi=0 --with-cuda=1 --with-cudac="win32fe nvcc" --with-cuda-dir=~/cuda --download-f2cblaslapack=1 --with-precision=single --with-64-bit-indices=0 --with-single-library=1 --with-endian=little --with-debugging=0 --with-x=0 --with-windows-graphics=0 --with-shared-libraries=1 --CUDAOPTFLAGS=-O2
>>>>
>>>> The stack trace leads to the diagonal inversion routine:
>>>>
>>>> [0]PETSC ERROR: #1 MatInvertDiagonal_SeqAIJ() line 1913 in C:\cygwin64\home\HBARRE~1\PETSC-~1\src\mat\impls\aij\seq\aij.c
>>>> [0]PETSC ERROR: #2 MatSOR_SeqAIJ() line 1944 in C:\cygwin64\home\HBARRE~1\PETSC-~1\src\mat\impls\aij\seq\aij.c
>>>> [0]PETSC ERROR: #3 MatSOR() line 4005 in C:\cygwin64\home\HBARRE~1\PETSC-~1\src\mat\INTERF~1\matrix.c
>>>> [0]PETSC ERROR: #4 PCPreSolve_Eisenstat() line 79 in C:\cygwin64\home\HBARRE~1\PETSC-~1\src\ksp\pc\impls\eisens\eisen.c
>>>> [0]PETSC ERROR: #5 PCPreSolve() line 1549 in C:\cygwin64\home\HBARRE~1\PETSC-~1\src\ksp\pc\INTERF~1\precon.c
>>>> [0]PETSC ERROR: #6 KSPSolve_Private() line 686 in C:\cygwin64\home\HBARRE~1\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c
>>>> [0]PETSC ERROR: #7 KSPSolve() line 889 in C:\cygwin64\home\HBARRE~1\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c
>>>> [0]PETSC ERROR: #8 SNESSolve_NEWTONLS() line 225 in C:\cygwin64\home\HBARRE~1\PETSC-~1\src\snes\impls\ls\ls.c
>>>> [0]PETSC ERROR: #9 SNESSolve() line 4567 in C:\cygwin64\home\HBARRE~1\PETSC-~1\src\snes\INTERF~1\snes.c
>>>>
>>>> I am 100% positive that the diagonal does not contain a zero entry, so my suspicion is either that this operation is not supported on the GPU at all (MatInvertDiagonal_SeqAIJ seems to access host-side memory) or that I am missing some setting to make it work on the GPU. Is this correct?
>>>>
>>>> Thanks!
>>>>
>>>> Cheers,
>>>> Héctor
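For both the MatView question and the "Zero diagonal on row 0" report, a small diagnostic can show which values the CPU-side code paths in the stack trace (MatSOR_SeqAIJ / MatInvertDiagonal_SeqAIJ) actually see. Whether these calls trigger the device-to-host transfer Héctor is asking about is exactly the open question in this thread, so this only observes the symptom rather than fixing it; the helper name CheckHostCopy is made up, and A is assumed to be the assembled SeqAIJCUSPARSE matrix handed to the solver.

#include <petscmat.h>

/* Hypothetical diagnostic helper (name invented for illustration):
   print the diagonal and the full matrix as seen through the generic
   PETSc interfaces on the host side. */
static PetscErrorCode CheckHostCopy(Mat A)
{
  Vec            d;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreateVecs(A, &d, NULL);CHKERRQ(ierr);  /* vector matching A's row layout */
  ierr = MatGetDiagonal(A, d);CHKERRQ(ierr);        /* the diagonal the SOR presolve tries to invert */
  ierr = VecView(d, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
  ierr = MatView(A, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
  ierr = VecDestroy(&d);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

If this still prints the initialization dummies while the device data is known to be different, that would point at the missing device-to-host synchronization discussed above rather than at a genuine zero on the diagonal.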
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature">Stefano</div>