<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div dir="auto">I will take a look at it and get back to you. Thanks.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 28, 2020, 7:29 AM jordic <<a href="mailto:jordic@cttc.upc.edu">jordic@cttc.upc.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="font-size:10pt;font-family:Verdana,Geneva,sans-serif">
<p>Dear all,</p>
<p>the following simple program:</p>
<p>//////////////////////////////////////////////////////////////////////////////////////</p>
<p>#include &lt;petscmat.h&gt;</p>
<p>static char help[] = "Repeated assembly of a diagonal MPIAIJ matrix.\n";<br>PetscErrorCode ierr=0;<br>int main(int argc,char **argv)<br>{<br> MPI_Comm comm;<br> PetscMPIInt rank,size;</p>
<p> ierr = PetscInitialize(&argc,&argv,NULL,help);if (ierr) return ierr;<br> comm = PETSC_COMM_WORLD;<br> MPI_Comm_rank(comm,&rank);<br> MPI_Comm_size(comm,&size);</p>
<p> Mat A;<br> MatCreate(comm, &A);<br> MatSetSizes(A, 1, 1, PETSC_DETERMINE, PETSC_DETERMINE);<br> MatSetFromOptions(A);<br> PetscInt dnz=1, onz=0;<br> MatMPIAIJSetPreallocation(A, 0, &dnz, 0, &onz);<br> MatSetOption(A, MAT_NO_OFF_PROC_ENTRIES, PETSC_TRUE);<br> MatSetOption(A, MAT_IGNORE_ZERO_ENTRIES, PETSC_TRUE);<br> PetscInt igid=rank, jgid=rank;<br> PetscScalar value=rank+1.0;</p>
<p>// for(int i=0; i<10; ++i)<br> for(;;) //infinite loop<br> {<br> MatSetValue(A, igid, jgid, value, INSERT_VALUES);<br> MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);<br> MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);<br> }<br> MatDestroy(&A);<br> PetscFinalize();<br> return ierr;<br>}</p>
<p>//////////////////////////////////////////////////////////////////////////////////////</p>
<p><span style="font-size:10pt">creates a simple diagonal matrix with one value per MPI rank. If the matrix type is </span><span style="font-size:10pt">"mpiaij" (-mat_type mpiaij) there is no problem, but with "mpiaijcusparse" (-mat_type mpiaijcusparse) the memory usage on the GPU grows with every iteration of the infinite loop. The only solution I have found is to destroy and recreate the matrix every time it needs to be updated. Is there a better way to avoid this problem?</span></p>
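<p>For reference, the destroy-and-recreate workaround looks roughly like this (a sketch only, reusing the variables comm, dnz, onz, igid, jgid and value from the program above):</p>

```c
/* Sketch: rebuild the matrix on every update instead of reusing it.
   This avoids the per-iteration GPU memory growth described above,
   at the cost of a full create/preallocate/destroy cycle each time. */
for(;;)
{
  Mat B;
  MatCreate(comm, &B);
  MatSetSizes(B, 1, 1, PETSC_DETERMINE, PETSC_DETERMINE);
  MatSetFromOptions(B);                       /* honors -mat_type ... */
  MatMPIAIJSetPreallocation(B, 0, &dnz, 0, &onz);
  MatSetValue(B, igid, jgid, value, INSERT_VALUES);
  MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);
  /* ... use B ... */
  MatDestroy(&B);                             /* frees the GPU copy too */
}
```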
<p>I am using PETSc Release Version 3.12.2 with these configure options:</p>
<p>Configure options --package-prefix-hash=/home_nobck/user/petsc-hash-pkgs --with-debugging=0 --with-fc=0 CC=gcc CXX=g++ --COPTFLAGS="-g -O3" --CXXOPTFLAGS="-g -O3" --CUDAOPTFLAGS="-D_FORCE_INLINES -g -O3" --with-mpi-include=/usr/lib/openmpi/include --with-mpi-lib="-L/usr/lib/openmpi/lib -lmpi_cxx -lmpi" --with-cuda=1 --with-precision=double --with-cuda-include=/usr/include --with-cuda-lib="-L/usr/lib/x86_64-linux-gnu -lcuda -lcudart -lcublas -lcufft -lcusparse -lcusolver" PETSC_ARCH=arch-ci-linux-opt-cxx-cuda-double</p>
<p>Thanks for your help,</p>
<p>Jorge</p>
</div>
</blockquote></div>