[petsc-users] MatSetValues error with ViennaCL types
Manuel Valera
mvalera-w at sdsu.edu
Wed Aug 15 16:10:19 CDT 2018
Ok,
I removed the MatSetType call and added DMSetMatrixPreallocateOnly before
creating the matrix. That resolved the MatSetValues error, but the same
situation persists: when I try to run with more than one processor it says the
following (a sketch of the setup as it now stands is included after the trace):
[0]PETSC ERROR: --------------------- Error Message
--------------------------------------------------------------
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Currently only handles ViennaCL matrices
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53 GIT
Date: 2018-05-31 17:31:13 +0300
[0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug
15 14:09:31 2018
[0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
--with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
--FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
--CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
[0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in
/home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
[0]PETSC ERROR: #2 PCSetUp() line 932 in
/home/valera/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #3 KSPSetUp() line 381 in
/home/valera/petsc/src/ksp/ksp/interface/itfunc.c
[1]PETSC ERROR:
------------------------------------------------------------------------
[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
probably memory access out of range
[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[1]PETSC ERROR: or see
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X
to find memory corruption errors
[1]PETSC ERROR: likely location of problem given in stack below
[1]PETSC ERROR: --------------------- Stack Frames
------------------------------------
[1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[1]PETSC ERROR: INSTEAD the line number of the start of the function
[1]PETSC ERROR: is given.
[1]PETSC ERROR: [1] PetscTraceBackErrorHandler line 182
/home/valera/petsc/src/sys/error/errtrace.c
[1]PETSC ERROR: [1] PetscError line 352
/home/valera/petsc/src/sys/error/err.c
[1]PETSC ERROR: [1] PCSetUp_SAVIENNACL line 45
/home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
[1]PETSC ERROR: [1] PCSetUp line 894
/home/valera/petsc/src/ksp/pc/interface/precon.c
[1]PETSC ERROR: [1] KSPSetUp line 294
/home/valera/petsc/src/ksp/ksp/interface/itfunc.c
[1]PETSC ERROR: --------------------- Error Message
--------------------------------------------------------------
[1]PETSC ERROR: Signal received
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53 GIT
Date: 2018-05-31 17:31:13 +0300
[1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug
15 14:09:31 2018
[1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
--with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
--FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
--CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
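As mentioned above, the matrix setup now looks roughly like this. This is only
a trimmed sketch, not the full routine; daDummy is the DMDA and A the matrix
from the code quoted further down, and the usual PETSc Fortran includes are
assumed:

call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
! only preallocate; the nonzero structure gets filled later by MatSetValues
call DMSetMatrixPreallocateOnly(daDummy,PETSC_TRUE,ierr)
call DMCreateMatrix(daDummy,A,ierr)
! the MatSetType(A,...) call that used to follow has been removed, so A keeps
! the type that was set on the DM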
Thanks,
On Wed, Aug 15, 2018 at 1:53 PM, Manuel Valera <mvalera-w at sdsu.edu> wrote:
> Thanks Matthew,
>
> I tried to do that by calling:
>
> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
>
> But I am not aware of how to do this for the DM, or whether it needs
> something more specific or different.
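>
> The full manual path I know is roughly this (only a sketch, reusing the local
> size locsize and the 19-nonzeros-per-row estimate from the code below):
>
> call MatCreate(PETSC_COMM_WORLD,A,ierr)
> call MatSetSizes(A,locsize,locsize,PETSC_DETERMINE,PETSC_DETERMINE,ierr)
> call MatSetType(A,MATAIJVIENNACL,ierr)
> ! calling both preallocation routines is safe; whichever one does not match
> ! the actual matrix type is ignored
> call MatSeqAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,ierr)
> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
>
> but I do not know what the equivalent is on the DM side.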
>
> Thanks,
>
> On Wed, Aug 15, 2018 at 1:51 PM, Matthew Knepley <knepley at gmail.com>
> wrote:
>
>> On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera <mvalera-w at sdsu.edu> wrote:
>>
>>> Hello PETSc devs,
>>>
>>> I am running into an error when trying to use the MATMPIAIJVIENNACL
>>> matrix type in MPI runs; the same code works with the MATSEQAIJVIENNACL
>>> type on one processor. The error happens when calling MatSetValues for
>>> this specific configuration. It does not occur when using only the MPI DM
>>> matrix types.
>>>
>>
>> The DM properly preallocates the matrix. I am assuming you do not here.
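>>
>> Roughly, letting the DM do it looks like this (a sketch using the names from
>> your code):
>>
>> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
>> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>> call DMCreateMatrix(daDummy,A,ierr)
>> ! A is returned already preallocated for the DMDA stencil, so no
>> ! MatMPIAIJSetPreallocation (and no MatSetType on A) is needed afterwards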
>>
>> Matt
>>
>>
>>> Any help will be appreciated,
>>>
>>> Thanks,
>>>
>>>
>>>
>>> My program call:
>>>
>>> mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
>>> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
>>> aijviennacl -pc_type saviennacl -log_view
>>>
>>>
>>> The error (repeats after each MatSetValues call):
>>>
>>> [1]PETSC ERROR: --------------------- Error Message
>>> --------------------------------------------------------------
>>> [1]PETSC ERROR: Argument out of range
>>> [1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50)
>>> into matrix
>>> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>>> for trouble shooting.
>>> [1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53 GIT
>>> Date: 2018-05-31 17:31:13 +0300
>>> [1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed
>>> Aug 15 13:10:44 2018
>>> [1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
>>> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
>>> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
>>> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64
>>> --download-viennacl
>>> [1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 608 in
>>> /home/valera/petsc/src/mat/impls/aij/mpi/mpiaij.c
>>> [1]PETSC ERROR: #2 MatSetValues() line 1339 in
>>> /home/valera/petsc/src/mat/interface/matrix.c
>>>
>>>
>>> My Code structure:
>>>
>>> call DMCreateMatrix(daDummy,A,ierr)
>>>> call MatSetFromOptions(A,ierr)
>>>> call MPI_Comm_size(PETSC_COMM_WORLD, numprocs, ierr)
>>>> if (numprocs > 1) then ! set matrix type parallel
>>>> ! Get local size
>>>> call DMDACreateNaturalVector(daDummy,Tmpnat,ierr)
>>>> call VecGetLocalSize(Tmpnat,locsize,ierr)
>>>> call VecDestroy(Tmpnat,ierr)
>>>> ! Set matrix
>>>> #ifdef GPU
>>>> call MatSetType(A,MATAIJVIENNACL,ierr)
>>>> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
>>>> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>>>> print*,'SETTING GPU TYPES'
>>>> #else
>>>> call DMSetMatType(daDummy,MATMPIAIJ,ierr)
>>>> call DMSetVecType(daDummy,VECMPI,ierr)
>>>> call MatSetType(A,MATMPIAIJ,ierr)!
>>>> #endif
>>>> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
>>>> else ! set matrix type sequential
>>>> #ifdef GPU
>>>> call DMSetMatType(daDummy,MATSEQAIJVIENNACL,ierr)
>>>> call DMSetVecType(daDummy,VECSEQVIENNACL,ierr)
>>>> call MatSetType(A,MATSEQAIJVIENNACL,ierr)
>>>> print*,'SETTING GPU TYPES'
>>>> #else
>>>> call DMSetMatType(daDummy,MATSEQAIJ,ierr)
>>>> call DMSetVecType(daDummy,VECSEQ,ierr)
>>>> call MatSetType(A,MATSEQAIJ,ierr)
>>>> #endif
>>>> call MatSetUp(A,ierr)
>>>> call getCenterInfo(daGrid,xstart,ystart,zstart,xend,yend,zend)
>>>>
>>>
>>>
>>>> do k=zstart,zend-1
>>>> do j=ystart,yend-1
>>>> do i=xstart,xend-1
>>>> [..]
>>>> call MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
>>>> [..]
>>>
>>>
>>>
>>>
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>
>