<div dir="ltr">Ok thanks Matt, i made a smaller case with only the linear solver and a 25x25 matrix, the error i have in this case is:<div><br></div><div><div>[valera@node50 alone]$ mpirun -n 1 ./linsolve -vec_type cusp -mat_type aijcusparse</div><div> laplacian.petsc !</div><div> TrivSoln loaded, size: 125 / 125</div><div> RHS loaded, size: 125 / 125</div><div>[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------</div><div>[0]PETSC ERROR: Null argument, when expecting valid pointer</div><div>[0]PETSC ERROR: Null Pointer: Parameter # 4</div><div>[0]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html">http://www.mcs.anl.gov/petsc/documentation/faq.html</a> for trouble shooting.</div><div>[0]PETSC ERROR: Petsc Development GIT revision: v3.8.3-1817-g96b6f8a GIT Date: 2018-02-28 10:19:08 -0600</div><div>[0]PETSC ERROR: ./linsolve on a cuda named node50 by valera Wed Mar 14 10:24:35 2018</div><div>[0]PETSC ERROR: Configure options PETSC_ARCH=cuda --with-cc=mpicc --with-cxx=mpic++ --with-fc=mpifort --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --with-cuda-arch=sm_60 --with-cusp=1 --with-cusp-dir=/home/valera/cusp --with-vienacl=1 --download-fblaslapack=1 --download-hypre</div><div>[0]PETSC ERROR: #1 VecSetValues() line 851 in /home/valera/petsc/src/vec/vec/interface/rvector.c</div><div>[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------</div><div>[0]PETSC ERROR: Invalid argument</div><div>[0]PETSC ERROR: Object (seq) is not seqcusp or mpicusp</div><div>[0]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html">http://www.mcs.anl.gov/petsc/documentation/faq.html</a> for trouble shooting.</div><div>[0]PETSC ERROR: Petsc Development GIT revision: v3.8.3-1817-g96b6f8a GIT Date: 2018-02-28 10:19:08 -0600</div><div>[0]PETSC ERROR: ./linsolve on a cuda named node50 by valera Wed Mar 14 10:24:35 2018</div><div>[0]PETSC ERROR: Configure options PETSC_ARCH=cuda --with-cc=mpicc --with-cxx=mpic++ --with-fc=mpifort --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --with-cuda-arch=sm_60 --with-cusp=1 --with-cusp-dir=/home/valera/cusp --with-vienacl=1 --download-fblaslapack=1 --download-hypre</div><div>[0]PETSC ERROR: #2 VecCUSPGetArrayRead() line 1792 in /home/valera/petsc/src/vec/vec/impls/seq/seqcusp/<a href="http://veccusp2.cu">veccusp2.cu</a></div><div>[0]PETSC ERROR: #3 VecAXPY_SeqCUSP() line 314 in /home/valera/petsc/src/vec/vec/impls/seq/seqcusp/<a href="http://veccusp2.cu">veccusp2.cu</a></div><div>[0]PETSC ERROR: #4 VecAXPY() line 612 in /home/valera/petsc/src/vec/vec/interface/rvector.c</div><div>[0]PETSC ERROR: #5 KSPSolve_GCR_cycle() line 60 in /home/valera/petsc/src/ksp/ksp/impls/gcr/gcr.c</div><div>[0]PETSC ERROR: #6 KSPSolve_GCR() line 114 in /home/valera/petsc/src/ksp/ksp/impls/gcr/gcr.c</div><div>[0]PETSC ERROR: #7 KSPSolve() line 669 in /home/valera/petsc/src/ksp/ksp/interface/itfunc.c</div><div> soln maxval: 0.0000000000000000 </div><div> soln minval: 0.0000000000000000 </div><div> Norm: 11.180339887498949 </div><div> Its: 0</div><div>WARNING! There are options you set that were not used!</div><div>WARNING! 
On Sun, Mar 11, 2018 at 9:00 AM, Matthew Knepley <knepley@gmail.com> wrote:

On Fri, Mar 9, 2018 at 3:05 AM, Manuel Valera <mvalera-w@mail.sdsu.edu> wrote:

> Hello all,
>
> I am working on porting a linear solver to GPUs for timing purposes. So far I have been able to compile and run the CUSP libraries and to build PETSc with CUSP and ViennaCL support. After the initial runs I noticed some errors; they are different for different flags, and I would appreciate any help interpreting them.
>
> The only elements in this program that use PETSc are the Laplacian matrix (sparse), the RHS and X vectors, and a PETSc scatter object, so I would say it is safe to pass the command-line arguments for the Mat/VecSetType()s instead of changing the source code.
>
> If I use -vec_type cuda -mat_type aijcusparse or -vec_type viennacl -mat_type aijviennacl, I get the following:

These systems do not properly propagate errors. My only advice is to run a smaller problem and see.
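For context on how the command-line types reach the objects: -vec_type and -mat_type are only applied to vectors and matrices that go through VecSetFromOptions()/MatSetFromOptions(), and the stack below dies exactly on that VecSetFromOptions()/VecSetType() path. Here is a minimal sketch of the creation sequence those options hook into, using a trivial diagonal system; the object names and sizes are illustrative, and this is not the gcmSeamount code.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            b, x;
  KSP            ksp;
  PetscInt       i, rstart, rend, N = 125;
  PetscScalar    one = 1.0;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);              /* honors -mat_type aijcusparse */
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {                       /* trivial diagonal system */
    ierr = MatSetValues(A, 1, &i, 1, &i, &one, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = VecCreate(PETSC_COMM_WORLD, &b);CHKERRQ(ierr);
  ierr = VecSetSizes(b, PETSC_DECIDE, N);CHKERRQ(ierr);
  ierr = VecSetFromOptions(b);CHKERRQ(ierr);              /* honors -vec_type cuda/cusp/viennacl */
  ierr = VecDuplicate(b, &x);CHKERRQ(ierr);               /* x inherits b's type */
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);            /* -ksp_type, -pc_type, ... */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Any vector created instead with VecCreateSeq()/VecCreateMPI(), or given an explicit VecSetType(v, VECSEQ), stays a host vector, which is one way to end up with the seq versus seqcusp mismatches seen elsewhere in this thread.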
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
> [0]PETSC ERROR: likely location of problem given in stack below
> [0]PETSC ERROR: ---------------------  Stack Frames ------------------------------------
> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
> [0]PETSC ERROR:       INSTEAD the line number of the start of the function
> [0]PETSC ERROR:       is given.
> [0]PETSC ERROR: [0] VecSetValues line 847 /home/valera/petsc/src/vec/vec/interface/rvector.c
> [0]PETSC ERROR: [0] VecSetType line 36 /home/valera/petsc/src/vec/vec/interface/vecreg.c
> [0]PETSC ERROR: [0] VecSetTypeFromOptions_Private line 1230 /home/valera/petsc/src/vec/vec/interface/vector.c
> [0]PETSC ERROR: [0] VecSetFromOptions line 1271 /home/valera/petsc/src/vec/vec/interface/vector.c
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Signal received
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.8.3-1817-g96b6f8a  GIT Date: 2018-02-28 10:19:08 -0600
> [0]PETSC ERROR: ./gcmSeamount on a cuda named node50 by valera Thu Mar 8 09:50:51 2018
> [0]PETSC ERROR: Configure options PETSC_ARCH=cuda --with-cc=mpicc --with-cxx=mpic++ --with-fc=mpifort --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --with-cuda-arch=sm_60 --with-cusp=1 --with-cusp-dir=/home/valera/cusp --with-vienacl=1 --download-fblaslapack=1 --download-hypre
> [0]PETSC ERROR: #5 User provided function() line 0 in unknown file
> --------------------------------------------------------------------------
>
> This seems to be a memory access out of range; maybe my vector is too big for my CUDA system? How do I assess that?
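On the question just above of whether the vector is too big for the GPU: a rough check is to compare the device memory that the CUDA runtime reports as free against what the vector (and the matrix copy) would need. A standalone sketch with purely illustrative sizes and a crude CSR-style estimate for the matrix (PetscScalar is 8 bytes in this build):

#include <stdio.h>
#include <cuda_runtime.h>

/* Rough sketch: compare what the PETSc objects would need on the device with
   what the CUDA runtime reports as free. N and the nonzero count are
   illustrative only. */
int main(void)
{
  size_t freeB = 0, totalB = 0;
  size_t N        = 125;                                  /* vector length */
  size_t nnz      = 7 * N;                                /* ballpark nonzeros for a 3-D Laplacian stencil */
  size_t vecBytes = N * sizeof(double);
  size_t matBytes = nnz * (sizeof(double) + sizeof(int))  /* CSR values + column indices */
                  + (N + 1) * sizeof(int);                /* row offsets */

  if (cudaMemGetInfo(&freeB, &totalB) != cudaSuccess) return 1;
  printf("need roughly %zu (vector) + %zu (matrix) bytes; GPU reports %zu free of %zu total\n",
         vecBytes, matBytes, freeB, totalB);
  return 0;
}

For scale, the -log_view output further down reports about 2.2e+08 bytes of PETSc object memory for this run, so a plain out-of-memory condition would be somewhat surprising; the valgrind and -start_in_debugger suggestions in the message above are the more direct way to locate the fault.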
> Next, if I use -vec_type cusp -mat_type aijcusparse, I get something different and more interesting:

We need to see the entire error message, since it has the stack.

This seems like a logic error, but it could definitely be on our end. Here is how I think about these:

 1) We have nightly test solves, so at least some solver configuration works.

 2) Some vector is marked read-only (this happens for inputs to solvers), but someone is trying to update it. The stack will tell me where this is happening.

  Thanks,

     Matt
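A note on point 2: as Matt says, the vectors passed into a solver are marked read-only while it runs, so any scatter or VecSetValues() that writes into the right-hand side has to be completed (and assembled) before KSPSolve() starts; the "Vec is locked read only" message below is what appears when that ordering is violated somewhere in the call chain. A sketch of the intended ordering, with purely illustrative names (this is not the gcmSeamount code):

#include <petscksp.h>

/* Sketch of the ordering only: the RHS must be fully assembled before
   KSPSolve(), because the solver marks its input vectors read-only while
   it runs. */
PetscErrorCode SolveOnce(KSP ksp, VecScatter scat, Vec rhs_local, Vec b, Vec x)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* 1) Write into b (scatter, VecSetValues, ...) before the solve ... */
  ierr = VecScatterBegin(scat, rhs_local, b, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(scat, rhs_local, b, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

  /* 2) ... then solve; writing into b from here on (for example from a monitor,
        or by reusing it as scratch space) triggers "Vec is locked read only". */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  /* 3) Read results out of x only after KSPSolve() has returned. */
  PetscFunctionReturn(0);
}

The second error block below, which fails inside VecScatterCUSP_StoS(), looks like the same kind of type mismatch as in the newer run at the top of this message: one end of the scatter is a plain seq vector rather than a seqcusp one.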
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Object is in wrong state
> [0]PETSC ERROR: Vec is locked read only, argument # 3
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.8.3-1817-g96b6f8a  GIT Date: 2018-02-28 10:19:08 -0600
> [0]PETSC ERROR: ./gcmSeamount on a cuda named node50 by valera Thu Mar 8 10:02:19 2018
> [0]PETSC ERROR: Configure options PETSC_ARCH=cuda --with-cc=mpicc --with-cxx=mpic++ --with-fc=mpifort --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --with-cuda-arch=sm_60 --with-cusp=1 --with-cusp-dir=/home/valera/cusp --with-vienacl=1 --download-fblaslapack=1 --download-hypre
> [0]PETSC ERROR: #48 KSPSolve() line 615 in /home/valera/petsc/src/ksp/ksp/interface/itfunc.c
>  PETSC_SOLVER_ONLY   6.8672990892082453E-005 s
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Invalid argument
> [0]PETSC ERROR: Object (seq) is not seqcusp or mpicusp
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.8.3-1817-g96b6f8a  GIT Date: 2018-02-28 10:19:08 -0600
> [0]PETSC ERROR: ./gcmSeamount on a cuda named node50 by valera Thu Mar 8 10:02:19 2018
> [0]PETSC ERROR: Configure options PETSC_ARCH=cuda --with-cc=mpicc --with-cxx=mpic++ --with-fc=mpifort --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --with-cuda-arch=sm_60 --with-cusp=1 --with-cusp-dir=/home/valera/cusp --with-vienacl=1 --download-fblaslapack=1 --download-hypre
> [0]PETSC ERROR: #49 VecCUSPGetArrayReadWrite() line 1718 in /home/valera/petsc/src/vec/vec/impls/seq/seqcusp/veccusp2.cu
> [0]PETSC ERROR: #50 VecScatterCUSP_StoS() line 269 in /home/valera/petsc/src/vec/vec/impls/seq/seqcusp/vecscattercusp.cu
>
> And it yields a "solution" to the system and also a log at the end:
>
> ./gcmSeamount on a cuda named node50 with 1 processor, by valera Thu Mar 8 10:02:24 2018
> Using Petsc Development GIT revision: v3.8.3-1817-g96b6f8a  GIT Date: 2018-02-28 10:19:08 -0600
>
>                       Max       Max/Min     Avg       Total
> Time (sec):           4.573e+00  1.00000     4.573e+00
> Objects:              8.100e+01  1.00000     8.100e+01
> Flop:                 3.492e+07  1.00000     3.492e+07  3.492e+07
> Flop/sec:             7.637e+06  1.00000     7.637e+06  7.637e+06
> Memory:               2.157e+08  1.00000     2.157e+08
> MPI Messages:         0.000e+00  0.00000     0.000e+00  0.000e+00
> MPI Message Lengths:  0.000e+00  0.00000     0.000e+00  0.000e+00
> MPI Reductions:       0.000e+00  0.00000
>
> Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
>                           e.g., VecAXPY() for real vectors of length N --> 2N flop
>                           and VecAXPY() for complex vectors of length N --> 8N flop
>
> Summary of Stages:   ----- Time ------  ----- Flop -----  --- Messages ---  -- Message Lengths --  -- Reductions --
>                         Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
>  0:      Main Stage: 4.5729e+00 100.0%  3.4924e+07 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
>
> ------------------------------------------------------------------------------------------------------------------------
> See the 'Profiling' chapter of the users' manual for details on interpreting output.
> Phase summary info:
>    Count: number of times phase was executed
>    Time and Flop: Max - maximum over all processors
>                   Ratio - ratio of maximum to minimum over all processors
>    Mess: number of messages sent
>    Avg. len: average message length (bytes)
>    Reduct: number of global reductions
>    Global: entire computation
>    Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
>       %T - percent time in this phase         %F - percent flop in this phase
>       %M - percent messages in this phase     %L - percent message lengths in this phase
>       %R - percent reductions in this phase
>    Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
> ------------------------------------------------------------------------------------------------------------------------
>
>       ##########################################################
>       #                                                        #
>       #                          WARNING!!!                    #
>       #                                                        #
>       #   This code was compiled with a debugging option,      #
>       #   To get timing results run ./configure                #
>       #   using --with-debugging=no, the performance will      #
>       #   be generally two or three times faster.              #
>       #                                                        #
>       ##########################################################
>
> Event Count Time (sec) Flop --- Global --- --- Stage --- Total
> Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
> ------------------------------------------------------------------------------------------------------------------------
>
> --- Event Stage 0: Main Stage
>
> MatLUFactorNum 1 1.0 4.9502e-02 1.0 3.49e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1100 0 0 0 1100 0 0 0 706
> MatILUFactorSym 1 1.0 1.9642e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatAssemblyBegin 2 1.0 6.9141e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatAssemblyEnd 2 1.0 2.6612e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 6 0 0 0 0 6 0 0 0 0 0
> MatGetRowIJ 1 1.0 5.0068e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatGetOrdering 1 1.0 1.7186e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatLoad 1 1.0 1.1575e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0
> MatView 1 1.0 8.0877e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0
> MatCUSPCopyTo 1 1.0 2.4664e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0
> VecSet 68 1.0 5.1665e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
> VecAssemblyBegin 17 1.0 5.2691e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecAssemblyEnd 17 1.0 4.3631e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecScatterBegin 15 1.0 1.5345e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecCUSPCopyFrom 1 1.0 1.1199e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> KSPSetUp 1 1.0 5.1929e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
> PCSetUp 2 1.0 8.6590e-02 1.0 3.49e+07 1.0 0.0e+00 0.0e+00 0.0e+00 2100 0 0 0 2100 0 0 0 403
> ------------------------------------------------------------------------------------------------------------------------
>
> Memory usage is given in bytes:
>
> Object Type Creations Destructions Memory Descendants' Mem.
> Reports information only for process 0.
>
> --- Event Stage 0: Main Stage
>
> Matrix 3 1 52856972 0.
> Matrix Null Space 1 1 608 0.
> Vector 66 3 3414600 0.
> Vector Scatter 1 1 680 0.
> Viewer 3 2 1680 0.
> Krylov Solver 1 0 0 0.
> Preconditioner 2 1 864 0.
> Index Set 4 1 800 0.
> ========================================================================================================================
> Average time to get PetscTime(): 9.53674e-08
> #PETSc Option Table entries:
> -ksp_view
> -log_view
> -mat_type aijcusparse
> -matload_block_size 1
> -vec_type cusp
> #End of PETSc Option Table entries
> Compiled without FORTRAN kernels
> Compiled with full precision matrices (default)
> sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
> Configure options: PETSC_ARCH=cuda --with-cc=mpicc --with-cxx=mpic++ --with-fc=mpifort --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 --with-cuda-arch=sm_60 --with-cusp=1 --with-cusp-dir=/home/valera/cusp --with-vienacl=1 --download-fblaslapack=1 --download-hypre
> -----------------------------------------
> Libraries compiled on Mon Mar 5 16:37:18 2018 on node50
> Machine characteristics: Linux-3.10.0-693.17.1.el7.x86_64-x86_64-with-centos-7.2.1511-Core
> Using PETSc directory: /home/valera/petsc
> Using PETSc arch: cuda
> -----------------------------------------
>
> Using C compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O3
> Using Fortran compiler: mpifort -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -O3
> -----------------------------------------
>
> Using include paths: -I/home/valera/petsc/cuda/include -I/home/valera/petsc/include -I/home/valera/petsc/include -I/home/valera/petsc/cuda/include -I/home/valera/cusp/ -I/usr/local/cuda/include
> -----------------------------------------
>
> Using C linker: mpicc
> Using Fortran linker: mpifort
> Using libraries: -Wl,-rpath,/home/valera/petsc/cuda/lib -L/home/valera/petsc/cuda/lib -lpetsc -Wl,-rpath,/home/valera/petsc/cuda/lib -L/home/valera/petsc/cuda/lib -Wl,-rpath,/usr/local/cuda/lib64 -L/usr/local/cuda/lib64 -Wl,-rpath,/usr/lib64/openmpi/lib -L/usr/lib64/openmpi/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.5 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5 -lHYPRE -lflapack -lfblas -lm -lcufft -lcublas -lcudart -lcusparse -lX11 -lstdc++ -ldl -lmpi_usempi -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lstdc++ -ldl
> -----------------------------------------
>
> Thanks for your help,
>
> Manuel
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/