[petsc-users] Cuda libraries and DMDA

Matthew Knepley knepley at gmail.com
Mon Apr 9 18:45:09 CDT 2018


On Mon, Apr 9, 2018 at 7:27 PM, Manuel Valera <mvalera-w at sdsu.edu> wrote:

> On Mon, Apr 9, 2018 at 4:09 PM, Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Mon, Apr 9, 2018 at 6:12 PM, Manuel Valera <mvalera-w at sdsu.edu> wrote:
>>
>>> Hello guys,
>>>
>>> I've made advances in my CUDA acceleration project; as you remember, I
>>> have a CFD model in need of better execution times.
>>>
>>> So far I have been able to solve the pressure system on the GPU and the
>>> rest in serial, using PETSc only for this pressure solve; the library I got
>>> working was ViennaCL. First question: do I still have to switch
>>> installations to use a different CUDA library? This was suggested before,
>>> so in order to use CUSP instead of ViennaCL, for example, do I currently
>>> have to change installations? Is this still the case?
>>>
>>
>> I am not sure what that means exactly. However, you can build PETSc with
>> both CUDA and ViennaCL support. The type of Vec/Mat is selected at runtime.
>>
>
> Karl Rupp wrote in a previous email:
>
> *Right now only one of {native CUDA, CUSP, ViennaCL} can be activated
> at configure time. This will be fixed later this month.*
>
> I was asking whether this has already been resolved in 3.9.
>

Karl knows better than I do. I thought that was fixed, but maybe not in
this release.
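
In any case, once more than one GPU backend is available in a single build,
the backend is selected at run time rather than at configure time. A minimal
sketch of the relevant command-line options, assuming a build configured with
both --with-cuda and --download-viennacl (the exact type names are worth
checking against the installed version; objects created from a DM take the
-dm_vec_type/-dm_mat_type variants instead):

  ./gcmBEAM -vec_type viennacl -mat_type aijviennacl    (ViennaCL backend)
  ./gcmBEAM -vec_type cuda -mat_type aijcusparse        (native CUDA/cuSPARSE backend)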


>
>
>>
>>
>>> Now I have started working on a fully parallelized version of the model,
>>> which uses DMs and DMDAs to distribute the arrays. If I try the same
>>> flags as before, I get an error saying "Currently only handles ViennaCL
>>> matrices" when trying to solve for pressure. I take it this is a feature
>>> that is still not implemented? What options do I have to solve for
>>> pressure, or to assign a DMDA array update to be done specifically on a
>>> GPU device?
>>>
>>
>> If we can't see the error, we are just guessing. Please send the entire
>> error message.
>>
>
> Got it; I will paste the error at the end of this email.
>

It is asking for a ViennaCL matrix. You must tell the DM to create one:

  -dm_mat_type aijviennacl
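
A minimal sketch in C of setting the GPU types on the DMDA programmatically
instead of through the option above, assuming a build with ViennaCL enabled
(the grid size, stencil, and dof are placeholders, and the same calls are
available through the Fortran interface):

  #include <petscdmda.h>

  int main(int argc, char **argv)
  {
    DM             da;
    Mat            A;
    Vec            x;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                        DMDA_STENCIL_STAR, 64, 64, PETSC_DECIDE, PETSC_DECIDE,
                        1, 1, NULL, NULL, &da);CHKERRQ(ierr);
    ierr = DMSetMatType(da, MATAIJVIENNACL);CHKERRQ(ierr); /* or -dm_mat_type aijviennacl */
    ierr = DMSetVecType(da, VECVIENNACL);CHKERRQ(ierr);    /* or -dm_vec_type viennacl    */
    ierr = DMSetFromOptions(da);CHKERRQ(ierr);
    ierr = DMSetUp(da);CHKERRQ(ierr);
    ierr = DMCreateMatrix(da, &A);CHKERRQ(ierr);           /* ViennaCL AIJ matrix */
    ierr = DMCreateGlobalVector(da, &x);CHKERRQ(ierr);     /* ViennaCL vector     */
    /* ... assemble A and solve as usual; PCSetUp_SAVIENNACL then sees the
       matrix type it expects ... */
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = DMDestroy(&da);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }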


>
>
>>
>> Note that we only do linear algebra on the GPU, so none of the
>> FormFunction/FormJacobian stuff for DMDA would run on the GPU.
>>
>
> Yes, we only use it for linear algebra, e.g. solving a linear system and
> updating an array with a problematic algorithm.
>
>
>
>>
>>
>>> I was thinking of using VecScatterCreateToZero for a regular vector,
>>>
>>
>> Why do you want a serial vector?
>>
>
> Because it looks like ViennaCL doesn't handle arrays created with DMDAVec;
> it was just an idea.
>

No, it just needs the right type.


>
>
>> but then I would have to create a vector and copy the DMDAVec into it,
>>>
>>
>> I do not understand what it means to copy the DM into the Vec.
>>
>
> I meant copying a DMDAVec into a Vec object; the first is created with a
> DMDA object for its mapping across processors,
>

There is no such thing as a DMDAVec. Everything is just a Vec.
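
A minimal sketch of that point, continuing from the snippet earlier in this
message (da and A are the objects created there, ierr is the usual
PetscErrorCode, petscksp.h must also be included, and the KSP setup is only
illustrative): the global vector obtained from the DMDA is an ordinary Vec
and can be passed straight to KSPSolve, with no intermediate copy and no
VecScatterCreateToZero.

  KSP ksp;
  Vec b, x;

  ierr = DMCreateGlobalVector(da, &b);CHKERRQ(ierr); /* an ordinary Vec, typed by the DM */
  ierr = VecDuplicate(b, &x);CHKERRQ(ierr);
  /* fill b here, e.g. through DMDAVecGetArray() (DMDAVecGetArrayF90 from Fortran) */
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);          /* no intermediate copy needed */
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);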

  Thanks,

    Matt


>
>>   Thanks,
>>
>>      Matt
>>
>>
>>> Is this accomplished with DMDAVecGetArrayReadF90 and then just a copy? Do
>>> you think this would generate too much overhead?
>>>
>>> Thanks so much for your input,
>>>
>>> Manuel
>>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>
>>
>
> The error happens when trying to use KSPSolve() on a vector made with
> DMDAVec routines; the matrix is created without any DMDA routines.
>
> Error:
>
> [0]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
> [0]PETSC ERROR: No support for this operation for this object type
> [0]PETSC ERROR: Currently only handles ViennaCL matrices
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.8.4-2418-gd9c423b  GIT
> Date: 2018-04-02 11:59:41 +0200
> [0]PETSC ERROR: ./gcmBEAM on a cuda named node50 by valera Mon Apr  9
> 16:24:26 2018
> [0]PETSC ERROR: Configure options PETSC_ARCH=cuda --download-mpich
> --download-fblaslapack COPTFLAGS=-O2 CXXOPTFLAGS=-O2 FOPTFLAGS=-O2
> --with-shared-libraries=1 --download-hypre --with-debugging=no
> --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --download-hypre --download-viennacl
> --download-cusp
> [0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in
> /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
> [0]PETSC ERROR: #2 PCSetUp() line 924 in /home/valera/petsc/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: #3 KSPSetUp() line 381 in /home/valera/petsc/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
> [0]PETSC ERROR: No support for this operation for this object type
> [0]PETSC ERROR: Currently only handles ViennaCL matrices
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.8.4-2418-gd9c423b  GIT
> Date: 2018-04-02 11:59:41 +0200
> [0]PETSC ERROR: ./gcmBEAM on a cuda named node50 by valera Mon Apr  9
> 16:24:26 2018
> [0]PETSC ERROR: Configure options PETSC_ARCH=cuda --download-mpich
> --download-fblaslapack COPTFLAGS=-O2 CXXOPTFLAGS=-O2 FOPTFLAGS=-O2
> --with-shared-libraries=1 --download-hypre --with-debugging=no
> --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --download-hypre --download-viennacl
> --download-cusp
> [0]PETSC ERROR: #4 PCSetUp_SAVIENNACL() line 47 in
> /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
> [0]PETSC ERROR: #5 PCSetUp() line 924 in /home/valera/petsc/src/ksp/pc/interface/precon.c
>  Finished setting up matrix objects
>  Exiting PrepareNetCDF
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS
> X to find memory corruption errors
> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and
> run
> [0]PETSC ERROR: to get more information on the crash.
> [0]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
> [0]PETSC ERROR: Signal received
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.8.4-2418-gd9c423b  GIT
> Date: 2018-04-02 11:59:41 +0200
> [0]PETSC ERROR: ./gcmBEAM on a cuda named node50 by valera Mon Apr  9
> 16:24:26 2018
> [0]PETSC ERROR: Configure options PETSC_ARCH=cuda --download-mpich
> --download-fblaslapack COPTFLAGS=-O2 CXXOPTFLAGS=-O2 FOPTFLAGS=-O2
> --with-shared-libraries=1 --download-hypre --with-debugging=no
> --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --download-hypre --download-viennacl
> --download-cusp
> [0]PETSC ERROR: #6 User provided function() line 0 in  unknown file
> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
> [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=59


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>

