[petsc-users] Problem with matrix and vector using GPU

Ramoni Z. Sedano Azevedo ramoni.zsedano at gmail.com
Thu Sep 21 08:22:02 CDT 2023


Using the following flags when calling the code

  -A_mat_type aijcusparse \
  -P_mat_type aijcusparse \
  -dm_vec_type cuda \
  -use_gpu_aware_mpi 0 \

and adding the call
PetscCallA( DMSetFromOptions(da, ierr) )

between the following two calls
PetscCallA( DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE,
DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_BOX, l+1,m+1,nzn,
PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, i3,i1, PETSC_NULL_INTEGER,
PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr) )

PetscCallA( DMSetUp(da,ierr) )

It finally worked. Thanks!
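
For reference, the resulting setup order is roughly the following (a minimal sketch, reusing the variable names from the original code quoted further down; the key point is that DMSetFromOptions runs after DMDACreate3d and before DMSetUp, so the option is seen before the DM builds its vectors):

PetscCallA( DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_BOX, l+1, m+1, nzn, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, i3, i1, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr) )
PetscCallA( DMSetFromOptions(da, ierr) )   ! reads -dm_vec_type cuda from the command line
PetscCallA( DMSetUp(da, ierr) )
PetscCallA( DMCreateGlobalVector(da, b, ierr) )   ! b is now a CUDA vector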

Ramoni

On Wed, Sep 20, 2023 at 2:47 PM Junchao Zhang <junchao.zhang at gmail.com>
wrote:

> Since you create the matrices yourself, I think you should use
>   -A_mat_type aijcusparse \
>   -P_mat_type aijcusparse \
> instead of
>    -A_dm_mat_type aijcusparse \
>    -P_dm_mat_type aijcusparse \
>
> Perhaps you can remove "-dm_vec_type cuda", as the dm mat type already
> determines the dm vector type.
>
> --Junchao Zhang
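
To make the prefix point concrete, a minimal sketch (taken from the creation code quoted further down): because A is created with MatCreate and given the options prefix 'A_', MatSetFromOptions looks for -A_mat_type; the option -A_dm_mat_type is never queried, since no DM carries that prefix.

PetscCallA( MatCreate(PETSC_COMM_WORLD, A, ierr) )
PetscCallA( MatSetOptionsPrefix(A, 'A_', ierr) )   ! options for this matrix start with 'A_'
PetscCallA( MatSetFromOptions(A, ierr) )           ! honours -A_mat_type aijcusparse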
>
>
> On Wed, Sep 20, 2023 at 12:12 PM Ramoni Z. Sedano Azevedo <
> ramoni.zsedano at gmail.com> wrote:
>
>> Thanks for the tip. Using dm_mat_type and dm_vec_type, the code runs.
>> ./${executable} \
>>  -A_dm_mat_type aijcusparse \
>>  -P_dm_mat_type aijcusparse \
>>  -dm_vec_type cuda \
>>  -use_gpu_aware_mpi 0 \
>>  -em_ksp_monitor_true_residual \
>>  -em_ksp_type bcgs \
>>  -em_pc_type bjacobi \
>>  -em_sub_pc_type ilu \
>>  -em_sub_pc_factor_levels 3 \
>>  -em_sub_pc_factor_fill 6 \
>>  < ./Parameters.inp \
>>
>> But at the end, the following message appears:
>> WARNING! There are options you set that were not used!
>> WARNING! could be spelling mistake, etc!
>> There are 3 unused database options. They are:
>> Option left: name:-A_dm_mat_type value: aijcusparse
>> Option left: name:-dm_vec_type value: cuda
>> Option left: name:-P_dm_mat_type value: aijcusparse
>>
>> Profiling with nvprof shows no GPU kernels, only CUDA API calls.
>>
>> Ramoni Z. S. Azevedo
>>
>> On Wed, Sep 20, 2023 at 12:31 PM Junchao Zhang <
>> junchao.zhang at gmail.com> wrote:
>>
>>> Try to also add  *-dm_mat_type aijcusparse -dm_vec_type cuda*
>>>
>>> --Junchao Zhang
>>>
>>>
>>> On Wed, Sep 20, 2023 at 10:21 AM Ramoni Z. Sedano Azevedo <
>>> ramoni.zsedano at gmail.com> wrote:
>>>
>>>>
>>>> Hey!
>>>>
>>>> I am using PETSc in a Fortran code parallelized with MPI. We would
>>>> like to add GPU parallelization, but we are encountering an error.
>>>>
>>>> PETSc is configured as follows:
>>>> #!/bin/bash
>>>> ./configure \
>>>>  --prefix=${PWD}/installdir \
>>>>  --with-fortran \
>>>>  --with-fortran-kernels=true \
>>>>  --with-cuda \
>>>>  --download-fblaslapack \
>>>>  --with-scalar-type=complex \
>>>>  --with-precision=double \
>>>>  --with-debugging=yes \
>>>>  --with-x=0 \
>>>>  --with-gnu-compilers=1 \
>>>>  --with-cc=mpicc \
>>>>  --with-cxx=mpicxx \
>>>>  --with-fc=mpif90 \
>>>>  --with-make-exec=make
>>>>
>>>> Within my code, matrices and vectors are allocated with the following
>>>> commands:
>>>> PetscCallA( DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE,
>>>> DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_BOX, l+1, m+1, nzn,
>>>> PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, i3, i1, PETSC_NULL_INTEGER,
>>>> PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr) )
>>>>
>>>> PetscCallA( DMSetUp(da,ierr) )
>>>>
>>>> PetscCallA( DMCreateGlobalVector(da,b,ierr) )
>>>> PetscCallA( VecDuplicate(b,xsol,ierr) )
>>>> PetscCallA( VecDuplicate(b,src,ierr) )
>>>> PetscCallA( VecDuplicate(b,rhoxyz,ierr) )
>>>>
>>>> PetscCallA( DMCreateLocalVector(da,localx,ierr) )
>>>> PetscCallA( VecDuplicate(localx,localb,ierr) )
>>>> PetscCallA( VecDuplicate(localx,localsrc,ierr) )
>>>> PetscCallA( VecDuplicate(localx,lrhoxyz,ierr) )
>>>>
>>>> PetscCallA( VecGetLocalSize(xsol,mloc,ierr) )
>>>>
>>>> ngrow=3*(l+1)*(m+1)*nzn
>>>>
>>>> PetscCallA( MatCreate(PETSC_COMM_WORLD,A,ierr) )
>>>> PetscCallA( MatSetOptionsPrefix(A,'A_',ierr) )
>>>> PetscCallA( MatSetSizes(A,mloc,mloc,ngrow,ngrow,ierr) )
>>>> PetscCallA( MatSetFromOptions(A,ierr) )
>>>> PetscCallA( MatSeqAIJSetPreallocation(A,i15,PETSC_NULL_INTEGER,ierr) )
>>>> PetscCallA( MatSeqBAIJSetPreallocation(A, i3, i15, PETSC_NULL_INTEGER, ierr) )
>>>>
>>>> PetscCallA( MatMPIAIJSetPreallocation(A, i15, PETSC_NULL_INTEGER, i15, PETSC_NULL_INTEGER, ierr) )
>>>>
>>>> PetscCallA( MatMPIBAIJSetPreallocation(A, i3, i15, PETSC_NULL_INTEGER, i15, PETSC_NULL_INTEGER, ierr) )
>>>>
>>>> PetscCallA( MatCreate(PETSC_COMM_WORLD, P, ierr) )
>>>> PetscCallA( MatSetOptionsPrefix(P, 'P_', ierr) )
>>>> PetscCallA( MatSetSizes(P, mloc, mloc, ngrow, ngrow, ierr) )
>>>> PetscCallA( MatSetFromOptions(P, ierr) )
>>>> PetscCallA( MatSeqAIJSetPreallocation(P, i15, PETSC_NULL_INTEGER, ierr) )
>>>> PetscCallA( MatSeqBAIJSetPreallocation(P, i3, i15, PETSC_NULL_INTEGER, ierr) )
>>>>
>>>> PetscCallA( MatMPIAIJSetPreallocation(P, i15, PETSC_NULL_INTEGER, i15, PETSC_NULL_INTEGER, ierr) )
>>>> PetscCallA( MatMPIBAIJSetPreallocation(P, i3, i15, PETSC_NULL_INTEGER, i15, PETSC_NULL_INTEGER, ierr) )
>>>>
>>>> PetscCallA( DMDAGetInfo(da, PETSC_NULL_INTEGER, mx, my, mz, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER,
>>>> PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER,
>>>> PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, ierr) )
>>>> PetscCallA( DMDAGetCorners(da,xs,ys,zs,xm,ym,zm,ierr) )
>>>> PetscCallA( DMDAGetGhostCorners(da,gxs,gys,gzs,gxm,gym,gzm,ierr) )
>>>>
>>>> PetscCallA( DMLocalToGlobal(da,localsrc,INSERT_VALUES,src,ierr) )
>>>> PetscCallA( DMGlobalToLocalBegin(da, src, INSERT_VALUES, localsrc, ierr) )
>>>> PetscCallA( DMGlobalToLocalEnd(da,src,INSERT_VALUES,localsrc,ierr) )
>>>> PetscCallA( DMLocalToGlobal(da,localb,INSERT_VALUES,b,ierr) )
>>>>
>>>> When calling the solver function
>>>> PetscCallA( KSPSolve(ksp,b,xsol,ierr) )
>>>> the following error occurs:
>>>>
>>>> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
>>>> [0]PETSC ERROR: Invalid argument
>>>> [0]PETSC ERROR: Object (seq) is not seqcuda or mpicuda
>>>> [0]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be the program crashed before they were used or a spelling mistake, etc!
>>>> [0]PETSC ERROR: Option left: name:-vec_type value: cuda
>>>> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
>>>> [0]PETSC ERROR: Petsc Release Version 3.18.4, unknown
>>>>
>>>> The code is executed with the following flags:
>>>> ./${executable} \
>>>>  -A_mat_type aijcusparse \
>>>>  -P_mat_type aijcusparse \
>>>>  -vec_type cuda \
>>>>  -use_gpu_aware_mpi 0 \
>>>>  -em_ksp_monitor_true_residual \
>>>>  -em_ksp_type bcgs \
>>>>  -em_pc_type bjacobi \
>>>>  -em_sub_pc_type ilu \
>>>>  -em_sub_pc_factor_levels 3 \
>>>>  -em_sub_pc_factor_fill 6 \
>>>>  < ./Parameters.inp \
>>>>
>>>> I have already tried mpiaijcusparse for the matrices and mpicuda for the
>>>> vectors, and the error continues.
>>>>
>>>> Would anyone have an idea what I might be doing wrong?
>>>>
>>>> Sincerely,
>>>> Ramoni Z. S. Azevedo
>>>>
>>>>