<div dir="ltr">Spoke too soon,<div><br></div><div>After fixing that problem, I tried more than one MPI process and got the following:</div><div><br></div><div><div> Matrix type: mpiaijviennacl </div><div> Of sizes: 125 x 125</div><div> Matrix type: mpiaijviennacl </div><div> Of sizes: 125 x 125</div><div>[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------</div><div>[0]PETSC ERROR: No support for this operation for this object type</div><div>[0]PETSC ERROR: Currently only handles ViennaCL matrices</div><div>[0]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html">http://www.mcs.anl.gov/petsc/documentation/faq.html</a> for trouble shooting.</div><div>[0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53 GIT Date: 2018-05-31 17:31:13 +0300</div><div>[0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Tue Aug 28 16:30:02 2018</div><div>[0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl</div><div>[0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/<a href="http://saviennacl.cu">saviennacl.cu</a></div><div>[0]PETSC ERROR: #2 PCSetUp() line 932 in /home/valera/petsc/src/ksp/pc/interface/precon.c</div><div>[1]PETSC ERROR: ------------------------------------------------------------------------</div><div>[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range</div><div>[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger</div><div>[1]PETSC ERROR: or see <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind">http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a></div><div>[1]PETSC ERROR: or 
try <a href="http://valgrind.org">http://valgrind.org</a> on GNU/linux and Apple Mac OS X to find memory corruption errors</div><div>[1]PETSC ERROR: likely location of problem given in stack below</div><div>[1]PETSC ERROR: --------------------- Stack Frames ------------------------------------</div><div>[1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,</div><div>[1]PETSC ERROR: INSTEAD the line number of the start of the function</div><div>[1]PETSC ERROR: is given.</div><div>[1]PETSC ERROR: [1] PetscTraceBackErrorHandler line 182 /home/valera/petsc/src/sys/error/errtrace.c</div><div>[1]PETSC ERROR: [1] PetscError line 352 /home/valera/petsc/src/sys/error/err.c</div><div>[1]PETSC ERROR: [1] PCSetUp_SAVIENNACL line 45 /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/<a href="http://saviennacl.cu">saviennacl.cu</a></div><div>[1]PETSC ERROR: [1] PCSetUp line 894 /home/valera/petsc/src/ksp/pc/interface/precon.c</div><div>[1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------</div><div>[1]PETSC ERROR: Signal received</div><div>[1]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html">http://www.mcs.anl.gov/petsc/documentation/faq.html</a> for trouble shooting.</div><div>[1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53 GIT Date: 2018-05-31 17:31:13 +0300</div><div>[1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Tue Aug 28 16:30:02 2018</div><div>[1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl</div><div>[1]PETSC ERROR: #1 User provided function() line 0 in unknown file</div><div>--------------------------------------------------------------------------</div><div>MPI_ABORT was invoked on rank 1 in 
communicator MPI_COMM_WORLD </div><div>with errorcode 59.</div><div><br></div><div>NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.</div><div>You may or may not see output from other processes, depending on</div><div>exactly when Open MPI kills them.</div><div>--------------------------------------------------------------------------</div><div>[0]PETSC ERROR: ------------------------------------------------------------------------</div><div>[0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end</div><div>[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger</div><div>[0]PETSC ERROR: or see <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind">http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a></div><div>[0]PETSC ERROR: or try <a href="http://valgrind.org">http://valgrind.org</a> on GNU/linux and Apple Mac OS X to find memory corruption errors</div><div>[0]PETSC ERROR: likely location of problem given in stack below</div><div>[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------</div><div>[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,</div><div>[0]PETSC ERROR: INSTEAD the line number of the start of the function</div><div>[0]PETSC ERROR: is given.</div><div>[0]PETSC ERROR: [0] PetscCommDuplicate line 130 /home/valera/petsc/src/sys/objects/tagm.c</div><div>[0]PETSC ERROR: [0] PetscHeaderCreate_Private line 34 /home/valera/petsc/src/sys/objects/inherit.c</div><div>[0]PETSC ERROR: [0] ISCreate line 35 /home/valera/petsc/src/vec/is/is/interface/isreg.c</div><div>[0]PETSC ERROR: [0] ISCreateGeneral line 668 /home/valera/petsc/src/vec/is/is/impls/general/general.c</div><div>[0]PETSC ERROR: [0] PCSetUp_SAVIENNACL line 45 /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/<a href="http://saviennacl.cu">saviennacl.cu</a></div><div>[0]PETSC ERROR: [0] PCSetUp line 894 
/home/valera/petsc/src/ksp/pc/interface/precon.c</div><div>[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------</div><div>[0]PETSC ERROR: Signal received</div><div>[0]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html">http://www.mcs.anl.gov/petsc/documentation/faq.html</a> for trouble shooting.</div><div>[0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53 GIT Date: 2018-05-31 17:31:13 +0300</div><div>[0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Tue Aug 28 16:30:02 2018</div><div>[0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl</div><div>[0]PETSC ERROR: #3 User provided function() line 0 in unknown file</div><div>[node50:30582] 1 more process has sent help message help-mpi-api.txt / mpi-abort</div><div>[node50:30582] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages</div></div><div><br></div><div>It currently runs with 1 MPI process + GPU, but I would like to use at least 16 MPI processes + GPU, doing the rest of the data management, which is not part of the main Laplacian, on MPI and the Laplacian solve on the GPU. Is this currently possible?</div><div><br></div><div>Thanks for your help,</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Aug 28, 2018 at 4:21 PM, Manuel Valera <span dir="ltr"><<a href="mailto:mvalera-w@sdsu.edu" target="_blank">mvalera-w@sdsu.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">OK, I found the culprit, and we can close this thread.<div><br></div><div>The problem was a 
missing variable for setting the maximum number of columns, which I had deleted at some point without realizing it. The error message was too ambiguous to catch this, so I had to compare the arguments of MatSetValues against a previous working version; then it was evident.</div><div><br></div><div>The good news is that I can now set the values with the ViennaCL types too.</div><div><br></div><div>Thanks for your kind help,</div><div><br></div><div>Manuel</div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Aug 28, 2018 at 11:25 AM, Smith, Barry F. <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
1) PetscMalloc() is never valid or needed in Fortran<br>
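To illustrate the point above with a minimal, assumed sketch (the names row, pos, vals, and maxcols are illustrative, not code from this thread): in Fortran, ordinary declarations or allocate take the place of PetscMalloc().<br>

```fortran
! Sketch only: instead of 'call PetscMalloc(1,row,ierr)', declare or
! allocate the index/value arrays with standard Fortran.
PetscInt :: row
PetscInt, allocatable :: pos(:)
PetscScalar, allocatable :: vals(:)
! maxcols is an assumed upper bound on nonzeros per row
allocate(pos(maxcols), vals(maxcols))
```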
<br>
   2) There is no reason to use DMSetMatrixPreallocateOnly(); just use DMCreateMatrix(), assuming that a DM (DMDA, DMPLEX, etc.) is suitable for your problem.<br>
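As a hedged sketch of the sequence suggested above (assuming the DMDA daDummy and matrix A used elsewhere in this thread), the whole setup reduces to:<br>

```fortran
! Set the types on the DM first, then let DMCreateMatrix() build a
! correctly preallocated matrix; no manual preallocation calls needed.
call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
call DMCreateMatrix(daDummy,A,ierr)
! ... fill locally owned rows with MatSetValues(...,INSERT_VALUES,...) ...
call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)
```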
<br>
   At this end we are totally guessing at what you are doing and so can provide little help. A simple, nonworking code that tries to do what you need would help us a great deal in understanding what you are trying to do.<br>
<span class="m_-4550385536022932824HOEnZb"><font color="#888888"><br>
Barry<br>
</font></span><div class="m_-4550385536022932824HOEnZb"><div class="m_-4550385536022932824h5"><br>
<br>
<br>
<br>
<br>
> On Aug 28, 2018, at 1:18 PM, Manuel Valera <<a href="mailto:mvalera-w@sdsu.edu" target="_blank">mvalera-w@sdsu.edu</a>> wrote:<br>
> <br>
> Matthew, PetscMalloc gives the same error,<br>
> <br>
> Barry, it would be very hard for me to reduce the code to a minimal working example. I guess all I need to understand is how to set up a DM matrix with DMSetMatrixPreallocateOnly() instead of MatMPIAIJSetPreallocation() as we were doing before; is there a simple example that does this in Fortran?<br>
> <br>
> Is the PetscMalloc call needed? Is 'call PetscMalloc(1,row,ierr)' a valid, compilable call to PetscMalloc? What other reason may there be for this error to happen?<br>
> <br>
> Just a reminder: trying to set up the matrix with MatAIJSetPreallocation() brings up an error about acknowledging the ViennaCL datatypes, which is why I'm trying to make this change on your recommendation.<br>
> <br>
> Thanks for your help,<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> On Mon, Aug 27, 2018 at 7:35 PM, Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
> <br>
> Send your code in a way we can compile and run it; it must be some simple issue that is hard to communicate in email.<br>
> <br>
> Barry<br>
> <br>
> <br>
> > On Aug 27, 2018, at 5:51 PM, Manuel Valera <<a href="mailto:mvalera-w@sdsu.edu" target="_blank">mvalera-w@sdsu.edu</a>> wrote:<br>
> > <br>
> > Hello everyone,<br>
> > <br>
> > I just had time to work on this again and checked the code for errors in the matrix entries. This is the exact code I was using for creating the matrix without DMSetMatrixPreallocateOnly, using MatMPIAIJSetPreallocation, and it worked that way; but trying it this way I get the same 'Column too large' error with any number in the column position of MatSetValues. <br>
> > <br>
> > I have set up my code to print the column argument (n) of MatSetValues, and in this case it is 7 (lower than 124); it still gives the error. Even entering a specific number in the MatSetValues column argument position gives the same error.<br>
> > <br>
> > So next I went back to ex50 here: <a href="http://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/ex50.c.html" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/p<wbr>etsc-current/src/ts/examples/t<wbr>utorials/ex50.c.html</a>, and it has a very similar structure except for the PetscMalloc1() call, so I tried adding that and got:<br>
> > <br>
> > /home/valera/ParGCCOM/Src/DMDA<wbr>Laplacian.f90:114: undefined reference to `petscmalloc1_'<br>
> > <br>
> > Any ideas on this behaviour?<br>
> > <br>
> > Thanks so much,<br>
> > <br>
> > <br>
> > <br>
> > <br>
> > <br>
> > <br>
> > On Thu, Aug 16, 2018 at 11:20 AM, Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
> > <br>
> > Column too large: col 10980 max 124<br>
> > <br>
> > You need to check the code that is generating the matrix entries. The matrix has 124 columns but you are attempting to put a value at column 10980<br>
> > <br>
> > Barry<br>
> > <br>
> > <br>
> > > On Aug 15, 2018, at 9:44 PM, Manuel Valera <<a href="mailto:mvalera-w@sdsu.edu" target="_blank">mvalera-w@sdsu.edu</a>> wrote:<br>
> > > <br>
> > > Thanks Matthew and Barry,<br>
> > > <br>
> > > Now my code looks like:<br>
> > > <br>
> > > call DMSetMatrixPreallocateOnly(daD<wbr>ummy,PETSC_TRUE,ierr)<br>
> > > call DMSetMatType(daDummy,MATMPIAIJ<wbr>VIENNACL,ierr)<br>
> > > call DMSetVecType(daDummy,VECMPIVIE<wbr>NNACL,ierr)<br>
> > > call DMCreateMatrix(daDummy,A,ierr)<br>
> > > call MatSetFromOptions(A,ierr)<br>
> > > call MatSetUp(A,ierr)<br>
> > > [...]<br>
> > > call MatSetValues(A,1,row,sumpos,po<wbr>s(0:iter-1),vals(0:iter-1),INS<wbr>ERT_VALUES,ierr)<br>
> > > [...]<br>
> > > call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)<br>
> > > call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)<br>
> > > <br>
> > > And i get a different error, now is:<br>
> > > <br>
> > > [0]PETSC ERROR: --------------------- Error Message ------------------------------<wbr>------------------------------<wbr>--<br>
> > > [0]PETSC ERROR: Argument out of range<br>
> > > [0]PETSC ERROR: Column too large: col 10980 max 124<br>
> > > [0]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/d<wbr>ocumentation/faq.html</a> for trouble shooting.<br>
> > > [0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53 GIT Date: 2018-05-31 17:31:13 +0300<br>
> > > [0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug 15 19:40:00 2018<br>
> > > [0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug --with-mpi-dir=/usr/lib64/open<wbr>mpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib<wbr>64 --download-viennacl<br>
> > > [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 442 in /home/valera/petsc/src/mat/imp<wbr>ls/aij/seq/aij.c<br>
> > > [0]PETSC ERROR: #2 MatSetValues() line 1339 in /home/valera/petsc/src/mat/int<wbr>erface/matrix.c<br>
> > > <br>
> > > <br>
> > > Thanks again,<br>
> > > <br>
> > > <br>
> > > <br>
> > > <br>
> > > <br>
> > > <br>
> > > <br>
> > > <br>
> > > On Wed, Aug 15, 2018 at 7:02 PM, Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
> > > <br>
> > > Should be<br>
> > > <br>
> > > call DMSetMatType(daDummy,MATMPIAIJ<wbr>VIENNACL,ierr)<br>
> > > call DMSetVecType(daDummy,VECMPIVIE<wbr>NNACL,ierr)<br>
> > > call DMCreateMatrix(daDummy,A,ierr)<br>
> > > <br>
> > > and remove the rest. You need to set the type of Mat you want the DM to return BEFORE you create the matrix.<br>
> > > <br>
> > > Barry<br>
> > > <br>
> > > <br>
> > > <br>
> > > > On Aug 15, 2018, at 4:45 PM, Manuel Valera <<a href="mailto:mvalera-w@sdsu.edu" target="_blank">mvalera-w@sdsu.edu</a>> wrote:<br>
> > > > <br>
> > > > Ok thanks for clarifying that, i wasn't sure if there were different types,<br>
> > > > <br>
> > > > Here is a stripped down version of my code, it seems like the preallocation is working now since the matrix population part is working without problem, but here it is for illustration purposes:<br>
> > > > <br>
> > > > call DMSetMatrixPreallocateOnly(daD<wbr>ummy,PETSC_TRUE,ierr)<br>
> > > > call DMCreateMatrix(daDummy,A,ierr)<br>
> > > > call MatSetFromOptions(A,ierr)<br>
> > > > call DMSetMatType(daDummy,MATMPIAIJ<wbr>VIENNACL,ierr)<br>
> > > > call DMSetVecType(daDummy,VECMPIVIE<wbr>NNACL,ierr)<br>
> > > > call MatMPIAIJSetPreallocation(A,19<wbr>,PETSC_NULL_INTEGER,19,PETSC_<wbr>NULL_INTEGER,ierr)<br>
> > > > call MatSetUp(A,ierr)<br>
> > > > [...]<br>
> > > > call MatSetValues(A,1,row,sumpos,po<wbr>s(0:iter-1),vals(0:iter-1),INS<wbr>ERT_VALUES,ierr)<br>
> > > > [...]<br>
> > > > call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)<br>
> > > > call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)<br>
> > > > <br>
> > > > Adding the first line there did the trick,<br>
> > > > <br>
> > > > Now the problem seems to be the program is not recognizing the matrix as ViennaCL type when i try with more than one processor, i get now:<br>
> > > > <br>
> > > > [0]PETSC ERROR: --------------------- Error Message ------------------------------<wbr>------------------------------<wbr>--<br>
> > > > [0]PETSC ERROR: No support for this operation for this object type<br>
> > > > [0]PETSC ERROR: Currently only handles ViennaCL matrices<br>
> > > > [0]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/d<wbr>ocumentation/faq.html</a> for trouble shooting.<br>
> > > > [0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53 GIT Date: 2018-05-31 17:31:13 +0300<br>
> > > > [0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug 15 14:44:22 2018<br>
> > > > [0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug --with-mpi-dir=/usr/lib64/open<wbr>mpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib<wbr>64 --download-viennacl<br>
> > > > [0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in /home/valera/petsc/src/ksp/pc/<wbr>impls/saviennaclcuda/<a href="http://saviennacl.cu" rel="noreferrer" target="_blank">saviennac<wbr>l.cu</a><br>
> > > > [0]PETSC ERROR: #2 PCSetUp() line 932 in /home/valera/petsc/src/ksp/pc/<wbr>interface/precon.c<br>
> > > > [0]PETSC ERROR: #3 KSPSetUp() line 381 in /home/valera/petsc/src/ksp/ksp<wbr>/interface/itfunc.c<br>
> > > > <br>
> > > > When running with:<br>
> > > > <br>
> > > > mpirun -n 1 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x<wbr>6x6/ jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type aijviennacl -pc_type saviennacl -log_view<br>
> > > > <br>
> > > > <br>
> > > > Thanks,<br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > On Wed, Aug 15, 2018 at 2:32 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
> > > > On Wed, Aug 15, 2018 at 5:20 PM Manuel Valera <<a href="mailto:mvalera-w@sdsu.edu" target="_blank">mvalera-w@sdsu.edu</a>> wrote:<br>
> > > > It seems to be resumed on: I do not know how to preallocate a DM Matrix correctly.<br>
> > > > <br>
> > > > There is only one matrix type, Mat. There are no separate DM matrices. A DM can create a matrix for you<br>
> > > > using DMCreateMatrix(), but that is a Mat and it is preallocated correctly. I am not sure what you are doing.<br>
> > > > <br>
> > > > Thanks,<br>
> > > > <br>
> > > > Matt<br>
> > > > <br>
> > > > The interesting part is that it only breaks when i need to populate a GPU matrix from MPI, so kudos on that, but it seems i need to do better on my code to get this setup working,<br>
> > > > <br>
> > > > Any help would be appreciated,<br>
> > > > <br>
> > > > Thanks,<br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > On Wed, Aug 15, 2018 at 2:15 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
> > > > On Wed, Aug 15, 2018 at 4:53 PM Manuel Valera <<a href="mailto:mvalera-w@sdsu.edu" target="_blank">mvalera-w@sdsu.edu</a>> wrote:<br>
> > > > Thanks Matthew, <br>
> > > > <br>
> > > > I try to do that when calling:<br>
> > > > <br>
> > > > call MatMPIAIJSetPreallocation(A,19<wbr>,PETSC_NULL_INTEGER,19,PETSC_<wbr>NULL_INTEGER,ierr)<br>
> > > > <br>
> > > > But i am not aware on how to do this for the DM if it needs something more specific/different,<br>
> > > > <br>
> > > > The error says that your preallocation is wrong for the values you are putting in. The DM does not control either,<br>
> > > > so I do not understand your email.<br>
> > > > <br>
> > > > Thanks,<br>
> > > > <br>
> > > > Matt<br>
> > > > <br>
> > > > Thanks,<br>
> > > > <br>
> > > > On Wed, Aug 15, 2018 at 1:51 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
> > > > On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera <<a href="mailto:mvalera-w@sdsu.edu" target="_blank">mvalera-w@sdsu.edu</a>> wrote:<br>
> > > > Hello PETSc devs,<br>
> > > > <br>
> > > > I am running into an error when trying to use the MATMPIAIJVIENNACL Matrix type in MPI calls, the same code runs for MATSEQAIJVIENNACL type in one processor. The error happens when calling MatSetValues for this specific configuration. It does not occur when using MPI DMMatrix types only.<br>
> > > > <br>
> > > > The DM properly preallocates the matrix. I am assuming you do not here.<br>
> > > > <br>
> > > > Matt<br>
> > > > <br>
> > > > Any help will be appreciated, <br>
> > > > <br>
> > > > Thanks,<br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > My program call:<br>
> > > > <br>
> > > > mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x<wbr>6x6/ jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type aijviennacl -pc_type saviennacl -log_view <br>
> > > > <br>
> > > > <br>
> > > > The error (repeats after each MatSetValues call):<br>
> > > > <br>
> > > > [1]PETSC ERROR: --------------------- Error Message ------------------------------<wbr>------------------------------<wbr>--<br>
> > > > [1]PETSC ERROR: Argument out of range<br>
> > > > [1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50) into matrix<br>
> > > > [1]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/d<wbr>ocumentation/faq.html</a> for trouble shooting.<br>
> > > > [1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53 GIT Date: 2018-05-31 17:31:13 +0300<br>
> > > > [1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug 15 13:10:44 2018<br>
> > > > [1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug --with-mpi-dir=/usr/lib64/open<wbr>mpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib<wbr>64 --download-viennacl<br>
> > > > [1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 608 in /home/valera/petsc/src/mat/imp<wbr>ls/aij/mpi/mpiaij.c<br>
> > > > [1]PETSC ERROR: #2 MatSetValues() line 1339 in /home/valera/petsc/src/mat/int<wbr>erface/matrix.c<br>
> > > > <br>
> > > > <br>
> > > > My Code structure:<br>
> > > > <br>
> > > > call DMCreateMatrix(daDummy,A,ierr)<br>
> > > > call MatSetFromOptions(A,ierr)<br>
> > > > call MPI_Comm_size(PETSC_COMM_WORLD<wbr>, numprocs, ierr)<br>
> > > > if (numprocs > 1) then ! set matrix type parallel<br>
> > > > ! Get local size<br>
> > > > call DMDACreateNaturalVector(daDumm<wbr>y,Tmpnat,ierr)<br>
> > > > call VecGetLocalSize(Tmpnat,locsize<wbr>,ierr)<br>
> > > > call VecDestroy(Tmpnat,ierr)<br>
> > > > ! Set matrix<br>
> > > > #ifdef GPU<br>
> > > > call MatSetType(A,MATAIJVIENNACL,ie<wbr>rr)<br>
> > > > call DMSetMatType(daDummy,MATMPIAIJ<wbr>VIENNACL,ierr)<br>
> > > > call DMSetVecType(daDummy,VECMPIVIE<wbr>NNACL,ierr)<br>
> > > > print*,'SETTING GPU TYPES'<br>
> > > > #else<br>
> > > > call DMSetMatType(daDummy,MATMPIAIJ<wbr>,ierr)<br>
> > > > call DMSetMatType(daDummy,VECMPI,ie<wbr>rr)<br>
> > > > call MatSetType(A,MATMPIAIJ,ierr)!<br>
> > > > #endif<br>
> > > > call MatMPIAIJSetPreallocation(A,19<wbr>,PETSC_NULL_INTEGER,19,PETSC_<wbr>NULL_INTEGER,ierr)<br>
> > > > else ! set matrix type sequential<br>
> > > > #ifdef GPU<br>
> > > > call DMSetMatType(daDummy,MATSEQAIJ<wbr>VIENNACL,ierr)<br>
> > > > call DMSetVecType(daDummy,VECSEQVIE<wbr>NNACL,ierr)<br>
> > > > call MatSetType(A,MATSEQAIJVIENNACL<wbr>,ierr)<br>
> > > > print*,'SETTING GPU TYPES'<br>
> > > > #else<br>
> > > > call DMSetMatType(daDummy,MATSEQAIJ<wbr>,ierr)<br>
> > > > call DMSetMatType(daDummy,VECSEQ,ie<wbr>rr)<br>
> > > > call MatSetType(A,MATSEQAIJ,ierr)<br>
> > > > #endif<br>
> > > > call MatSetUp(A,ierr)<br>
> > > > call getCenterInfo(daGrid,xstart,ys<wbr>tart,zstart,xend,yend,zend)<br>
> > > > <br>
> > > > do k=zstart,zend-1<br>
> > > > do j=ystart,yend-1<br>
> > > > do i=xstart,xend-1<br>
> > > > [..]<br>
> > > > call MatSetValues(A,1,row,sumpos,po<wbr>s(0:iter-1),vals(0:iter-1),INS<wbr>ERT_VALUES,ierr)<br>
> > > > [..]<br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > -- <br>
> > > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> > > > -- Norbert Wiener<br>
> > > > <br>
> > > > <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">https://www.cse.buffalo.edu/~k<wbr>nepley/</a><br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > -- <br>
> > > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> > > > -- Norbert Wiener<br>
> > > > <br>
> > > > <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">https://www.cse.buffalo.edu/~k<wbr>nepley/</a><br>
> > > > <br>
> > > > <br>
> > > > <br>
> > > > -- <br>
> > > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> > > > -- Norbert Wiener<br>
> > > > <br>
> > > > <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">https://www.cse.buffalo.edu/~k<wbr>nepley/</a><br>
> > > > <br>
> > > <br>
> > > <br>
> > <br>
> > <br>
> <br>
> <br>
<br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>