[petsc-users] MatSetValues error with ViennaCL types

Matthew Knepley knepley at gmail.com
Wed Aug 15 15:51:39 CDT 2018


On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera <mvalera-w at sdsu.edu> wrote:

> Hello PETSc devs,
>
> I am running into an error when trying to use the MATMPIAIJVIENNACL matrix
> type in parallel; the same code runs with the MATSEQAIJVIENNACL type on one
> processor. The error happens when calling MatSetValues for this specific
> configuration. It does not occur when using the MPI DM matrix types only.
>

The DM properly preallocates the matrix. I am assuming you do not do that here.
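
If you want the ViennaCL types, one option is to set them on the DM before
DMCreateMatrix() and then just use the matrix the DM hands back, instead of
calling MatSetType() on A afterward; as far as I can tell, changing the type
after DMCreateMatrix() throws away the preallocation the DM did. A minimal
sketch, reusing your daDummy/A names in a hypothetical helper, and assuming the
generic MATAIJVIENNACL/VECVIENNACL names are available from the Fortran
includes (they pick the Seq or MPI variant from the communicator size):

      subroutine CreateSystemMatrix(daDummy,A,ierr)
#include <petsc/finclude/petscdmda.h>
      use petscdmda
      implicit none
      DM             :: daDummy
      Mat            :: A
      PetscErrorCode :: ierr

#ifdef GPU
      ! set the GPU types on the DM itself, before DMCreateMatrix()
      call DMSetMatType(daDummy,MATAIJVIENNACL,ierr)
      call DMSetVecType(daDummy,VECVIENNACL,ierr)
#endif
      ! DMCreateMatrix() then returns a matrix of that type, already
      ! preallocated for the DMDA stencil, so there is no need to call
      ! MatSetType() or MatMPIAIJSetPreallocation() on A afterward
      call DMCreateMatrix(daDummy,A,ierr)
      end subroutine CreateSystemMatrix

Since you already pass -dm_mat_type aijviennacl -dm_vec_type viennacl on the
command line, calling DMSetFromOptions() on the DM should accomplish the same
thing without the #ifdef.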

   Matt


> Any help will be appreciated,
>
> Thanks,
>
>
>
> My program call:
>
> mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
> aijviennacl -pc_type saviennacl -log_view
>
>
> The error (repeats after each MatSetValues call):
>
> [1]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
> [1]PETSC ERROR: Argument out of range
> [1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50) into
> matrix
> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
> Date: 2018-05-31 17:31:13 +0300
> [1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed
> Aug 15 13:10:44 2018
> [1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
> [1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 608 in
> /home/valera/petsc/src/mat/impls/aij/mpi/mpiaij.c
> [1]PETSC ERROR: #2 MatSetValues() line 1339 in
> /home/valera/petsc/src/mat/interface/matrix.c
>
>
> My Code structure:
>
>> call DMCreateMatrix(daDummy,A,ierr)
>> call MatSetFromOptions(A,ierr)
>> call MPI_Comm_size(PETSC_COMM_WORLD, numprocs, ierr)
>> if (numprocs > 1) then  ! set matrix type parallel
>>     ! Get local size
>>     call DMDACreateNaturalVector(daDummy,Tmpnat,ierr)
>>     call VecGetLocalSize(Tmpnat,locsize,ierr)
>>     call VecDestroy(Tmpnat,ierr)
>>     ! Set matrix
>> #ifdef GPU
>>     call MatSetType(A,MATAIJVIENNACL,ierr)
>>     call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
>>     call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>>     print*,'SETTING GPU TYPES'
>> #else
>>     call DMSetMatType(daDummy,MATMPIAIJ,ierr)
>>     call DMSetVecType(daDummy,VECMPI,ierr)
>>     call MatSetType(A,MATMPIAIJ,ierr)
>> #endif
>>     call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
>> else                    ! set matrix type sequential
>> #ifdef GPU
>>     call DMSetMatType(daDummy,MATSEQAIJVIENNACL,ierr)
>>     call DMSetVecType(daDummy,VECSEQVIENNACL,ierr)
>>     call MatSetType(A,MATSEQAIJVIENNACL,ierr)
>>     print*,'SETTING GPU TYPES'
>> #else
>>     call DMSetMatType(daDummy,MATSEQAIJ,ierr)
>>     call DMSetVecType(daDummy,VECSEQ,ierr)
>>     call MatSetType(A,MATSEQAIJ,ierr)
>> #endif
>> end if
>> call MatSetUp(A,ierr)
>> call getCenterInfo(daGrid,xstart,ystart,zstart,xend,yend,zend)
>>
>
>
>> do k=zstart,zend-1
>>     do j=ystart,yend-1
>>         do i=xstart,xend-1
>> [..]
>>            call MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
>> [..]
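
If you prefer to keep switching the type on A after DMCreateMatrix(), then as
far as I can tell you need to redo the preallocation after that MatSetType()
and before the first MatSetValues(), in the sequential branch as well as the
parallel one. A rough sketch (hypothetical helper name, reusing the 19/19
estimates from your MatMPIAIJSetPreallocation() call); the preallocation call
that does not match A's actual Seq/MPI type is simply ignored:

      subroutine RetypeAndPreallocate(A,ierr)
#include <petsc/finclude/petscmat.h>
      use petscmat
      implicit none
      Mat            :: A
      PetscErrorCode :: ierr

      ! changing the type rebuilds A, so it has to be preallocated again
      call MatSetType(A,MATAIJVIENNACL,ierr)
      ! 19 nonzeros per row in the diagonal and off-diagonal blocks, as in
      ! your code; the variant that does not match A's type is a no-op
      call MatSeqAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,ierr)
      call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
      end subroutine RetypeAndPreallocate

That said, letting the DM do the preallocation, as sketched above, is less
error prone: the DMDA counts are exact for the stencil rather than a uniform
guess per row.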

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/