<div dir="ltr">Back to SuperLU + GPUs (adding Sherry)<div><br></div><div>I get this error (appended) running 'check', as I said before. It looks like ex19 is <b><font color="#ff0000">failing</font></b> with CUDA, but it is not clear that it has anything to do with SuperLU. I cannot find these diagnostics, which got printed after the error, anywhere in PETSc or SuperLU.</div><div><br></div><div>So this is a problem, but moving on to my code (plex/ex11 in mark/feature-xgc-interface-rebase-v2, configure script appended). It runs. I use SuperLU and GPUs, but SuperLU does not seem to be using the GPUs:</div><div><font face="monospace" color="#000000"><br></font></div><div><font face="monospace" color="#000000">------------------------------------------------------------------------------------------------------------------------<br>Event Count Time (sec) Flop --- Global --- --- Stage ---- Total GPU - CpuToGpu - - GpuToCpu - GPU<br> Max Ratio Max Ratio Max Ratio Mess AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s Count Size Count Size %F<br>---------------------------------------------------------------------------------------------------------------------------------------------------------------<br></font></div><div><font face="monospace" color="#000000"> ....</font></div><div><font face="monospace" color="#000000">MatLUFactorNum 12 1.0 <b>2.3416e+01</b> 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 <b>0 0.00e+00 0 0.00e+00 0</b><br></font></div><div><br></div><font color="#000000">Now the no-CUDA version: the times are the same, and there is no GPU communication above. 
So SuperLU does not seem to be using GPUs.</font><div><font color="#000000"><br></font><div><font face="monospace" color="#000000">------------------------------------------------------------------------------------------------------------------------<br>Event Count Time (sec) Flop --- Global --- --- Stage ---- Total<br> Max Ratio Max Ratio Max Ratio Mess AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s<br>------------------------------------------------------------------------------------------------------------------------ </font></div><div><font color="#000000"><font face="monospace"> ....<br>MatLUFactorNum 12 1.0 <b>2.3421e+01</b> 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0</font><br></font></div><div><br></div>There are some differences: ex19 uses DMDA and I use DMPlex; 'check' is run in my home directory, where files cannot be written, whereas I run my code in the project areas.<br><br>The timings are different without SuperLU, so I think SuperLU is being used. This is how I run it (with and without -mat_superlu_equil -dm_mat_type sell):</div><div><br>jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view hdf5:re33d.h5 -vec_view hdf5:re33d.h5::append -test_type spitzer -Ez 0 -petscspace_degree 2 -mass_petscspace_degree 2 -petscspace_poly_tensor 1 -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 -ion_charges 2 -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor -snes_converged_reason -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 -ts_dt 1e-1 -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed 0.75 -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 -amr_z_refine2 0 -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 -z_radius2 .1 -plot_dt .10 
-impurity_source_type pulse -pulse_start_time 2600 -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info :dm,tsadapt: -sub_thread_block_size 4 -options_left -log_view -pc_factor_mat_solver_type superlu -mat_superlu_equil -dm_mat_type sell<br><br>So there is a bug in ex19 on SUMMIT and I am not getting GPUs turned on in SuperLU.</div><div>Thoughts?<br><br>Thanks,<div>Mark<br><br><div>09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-opt-gnu-cuda-omp check<br>Running check examples to verify correct installation<br>Using PETSC_DIR=/ccs/home/adams/petsc and PETSC_ARCH=arch-summit-opt-gnu-cuda-omp<br>C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process<br>C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes<br>2c2,39<br>< Number of SNES iterations = 2<br>---<br><font color="#ff0000"><b>> ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion `cacheNode != __null' failed.<br></b></font>> [h50n09:102287] *** Process received signal ***<br>> CUDA version: v 10010<br>> CUDA Devices:<br>><br>> 0 : Tesla V100-SXM2-16GB 7 0<br>> Global memory: 16128 mb<br>> Shared memory: 48 kb<br>> Constant memory: 64 kb<br>> Block registers: 65536<br>><br>> [h50n09:102287] Signal: Aborted (6)<br>> [h50n09:102287] Associated errno: Unknown error 1072693248 (1072693248)<br>> [h50n09:102287] Signal code: User function (kill, sigsend, abort, etc.) 
(0)<br>> [h50n09:102287] [ 0] [0x2000000504d8]<br>> [h50n09:102287] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094]<br>> [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4]<br>> [h50n09:102287] [ 3] /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4]<br>> [h50n09:102287] [ 4] /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8]<br>> [h50n09:102287] [ 5] /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0]<br>> [h50n09:102287] [ 6] /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50]<br>> [h50n09:102287] [ 7] /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8]<br>> [h50n09:102287] [ 8] /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4]<br>> [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4]<br>> [h50n09:102287] [10] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70]<br>> [h50n09:102287] [11] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c]<br>> [h50n09:102287] [12] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270]<br>> [h50n09:102287] [13] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4]<br>> [h50n09:102287] [14] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c]<br>> [h50n09:102287] [15] 
/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc]<br>> [h50n09:102287] [16] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260]<br>> [h50n09:102287] [17] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170]<br>> [h50n09:102287] [18] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394]<br>> [h50n09:102287] [19] ./ex19[0x10001a6c]<br>> [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200]<br>> [h50n09:102287] [21] /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4]<br>> [h50n09:102287] *** End of error message ***<br>> ERROR: One or more process (first noticed rank 0) terminated with signal 6<br>/ccs/home/adams/petsc/src/snes/tutorials<br>Possible problem with ex19 running with superlu_dist, diffs above<br></div><div><br></div></div><div><br></div><div><br></div><div><br></div><div>#!/usr/bin/env python<br>if __name__ == '__main__':<br> import sys<br> import os<br> sys.path.insert(0, os.path.abspath('config'))<br> import configure<br> configure_options = [<br> '--with-fc=0',<br> '--COPTFLAGS=-g -O2 -fPIC -fopenmp',<br> '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp',<br> '--FOPTFLAGS=-g -O2 -fPIC -fopenmp',<br> '--CUDAOPTFLAGS=-O2 -g',<br> '--with-ssl=0',<br> '--with-batch=0',<br> '--with-cxx=mpicxx',<br> '--with-mpiexec=jsrun -g1',<br> '--with-cuda=1',<br> '--with-cudac=nvcc',<br> '--download-p4est=1',<br> '--download-zlib',<br> '--download-hdf5=1',<br> '--download-metis',<br> '--download-superlu',<br> '--download-superlu_dist',<br> '--with-make-np=16',<br> # '--with-hwloc=0',<br> '--download-parmetis',<br> # '--download-hypre',<br> '--download-triangle',<br> # '--download-amgx',<br> # '--download-fblaslapack',<br> '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + '/lib64 -lblas -llapack',<br> '--with-cc=mpicc',<br> # '--with-fc=mpif90',<br> 
'--with-shared-libraries=1',<br> # '--known-mpi-shared-libraries=1',<br> '--with-x=0',<br> '--with-64-bit-indices=0',<br> '--with-debugging=0',<br> 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp',<br> '--with-openmp=1',<br> '--with-threadsaftey=1',<br> '--with-log=1'<br> ]<br> configure.petsc_configure(configure_options)<br> <br></div><div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 15, 2020 at 9:58 PM Satish Balay <<a href="mailto:balay@mcs.anl.gov">balay@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">The crash is inside Superlu_DIST - so don't know what to suggest.<br>
<br>
Might have to debug this via debugger and check with Sherry.<br>
<br>
Satish<br>
<br>
On Wed, 15 Apr 2020, Mark Adams wrote:<br>
<br>
> Ah, OK 'check' will test SuperLU. Semi worked:<br>
> <br>
> s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make<br>
> PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp<br>
> check<br>
> Running check examples to verify correct installation<br>
> Using PETSC_DIR=/ccs/home/adams/petsc and<br>
> PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp<br>
> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process<br>
> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes<br>
> 2c2,38<br>
> < Number of SNES iterations = 2<br>
> ---<br>
> > CUDA version: v 10010<br>
> > CUDA Devices:<br>
> ><br>
> > 0 : Tesla V100-SXM2-16GB 7 0<br>
> > Global memory: 16128 mb<br>
> > Shared memory: 48 kb<br>
> > Constant memory: 64 kb<br>
> > Block registers: 65536<br>
> ><br>
> > ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion<br>
> `cacheNode != __null' failed.<br>
> > [h16n07:78357] *** Process received signal ***<br>
> > [h16n07:78357] Signal: Aborted (6)<br>
> > [h16n07:78357] Signal code: (1704218624)<br>
> > [h16n07:78357] [ 0] [0x2000000504d8]<br>
> > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094]<br>
> > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4]<br>
> > [h16n07:78357] [ 3] /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4]<br>
> > [h16n07:78357] [ 4]<br>
> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8]<br>
> > [h16n07:78357] [ 5]<br>
> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0]<br>
> > [h16n07:78357] [ 6]<br>
> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50]<br>
> > [h16n07:78357] [ 7]<br>
> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8]<br>
> > [h16n07:78357] [ 8]<br>
> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4]<br>
> > [h16n07:78357] [ 9]<br>
> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058]<br>
> > [h16n07:78357] [10]<br>
> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c]<br>
> > [h16n07:78357] [11]<br>
> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4]<br>
> > [h16n07:78357] [12]<br>
> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4]<br>
> > [h16n07:78357] [13]<br>
> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990]<br>
> > [h16n07:78357] [14]<br>
> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594]<br>
> > [h16n07:78357] [15]<br>
> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908]<br>
> > [h16n07:78357] [16]<br>
> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690]<br>
> > [h16n07:78357] [17]<br>
> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac]<br>
> > [h16n07:78357] [18]<br>
> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0]<br>
> > [h16n07:78357] [19] ./ex19[0x10002ac8]<br>
> > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200]<br>
> > [h16n07:78357] [21]<br>
> /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4]<br>
> > [h16n07:78357] *** End of error message ***<br>
> > ERROR: One or more process (first noticed rank 0) terminated with signal<br>
> 6<br>
> /ccs/home/adams/petsc/src/snes/tutorials<br>
> Possible problem with ex19 running with superlu_dist, diffs above<br>
> =========================================<br>
> <br>
> On Wed, Apr 15, 2020 at 5:58 PM Satish Balay <<a href="mailto:balay@mcs.anl.gov" target="_blank">balay@mcs.anl.gov</a>> wrote:<br>
> <br>
> > Please send configure.log<br>
> ><br>
> > This is what I get on my linux build:<br>
> ><br>
> > [balay@p1 petsc]$ ./configure<br>
> > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1<br>
> > --with-openmp=1 --download-superlu-dist=1 && make && make check<br>
> > <snip><br>
> > Running check examples to verify correct installation<br>
> > Using PETSC_DIR=/home/balay/petsc and PETSC_ARCH=arch-linux-c-debug<br>
> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process<br>
> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes<br>
> > 1a2,19<br>
> > > CUDA version: v 10020<br>
> > > CUDA Devices:<br>
> > ><br>
> > > 0 : Quadro T2000 7 5<br>
> > > Global memory: 3911 mb<br>
> > > Shared memory: 48 kb<br>
> > > Constant memory: 64 kb<br>
> > > Block registers: 65536<br>
> > ><br>
> > > CUDA version: v 10020<br>
> > > CUDA Devices:<br>
> > ><br>
> > > 0 : Quadro T2000 7 5<br>
> > > Global memory: 3911 mb<br>
> > > Shared memory: 48 kb<br>
> > > Constant memory: 64 kb<br>
> > > Block registers: 65536<br>
> > ><br>
> > /home/balay/petsc/src/snes/tutorials<br>
> > Possible problem with ex19 running with superlu_dist, diffs above<br>
> > =========================================<br>
> > Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process<br>
> > Completed test examples<br>
> ><br>
> ><br>
> > On Wed, 15 Apr 2020, Mark Adams wrote:<br>
> ><br>
> > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay <<a href="mailto:balay@mcs.anl.gov" target="_blank">balay@mcs.anl.gov</a>> wrote:<br>
> > ><br>
> > > > The build should work. It should give some verbose info [at runtime]<br>
> > > > regarding GPUs - from the following code.<br>
> > > ><br>
> > > ><br>
> > > I don't see that and I am running GPUs in my code and have gotten<br>
> > cusparse<br>
> > > LU to run. Should I use '-info :sys:' ?<br>
> > ><br>
> > ><br>
> > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>><br>
> > > > void DisplayHeader()<br>
> > > > {<br>
> > > > const int kb = 1024;<br>
> > > > const int mb = kb * kb;<br>
> > > > // cout << "NBody.GPU" << endl << "=========" << endl << endl;<br>
> > > ><br>
> > > > printf("CUDA version: v %d\n",CUDART_VERSION);<br>
> > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << "." <<<br>
> > > > THRUST_MINOR_VERSION << endl << endl;<br>
> > > ><br>
> > > > int devCount;<br>
> > > > cudaGetDeviceCount(&devCount);<br>
> > > > printf( "CUDA Devices: \n \n");<br>
> > > > <snip><br>
> > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<br>
> > > ><br>
> > > > Satish<br>
> > > ><br>
> > > > On Wed, 15 Apr 2020, Junchao Zhang wrote:<br>
> > > ><br>
> > > > > I remember Barry said superlu gpu support is broken.<br>
> > > > > --Junchao Zhang<br>
> > > > ><br>
> > > > ><br>
> > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:<br>
> > > > ><br>
> > > > > > How does one use SuperLU with GPUs. I don't seem to get any GPU<br>
> > > > > > performance data so I assume GPUs are not getting turned on. Am I<br>
> > wrong<br>
> > > > > > about that?<br>
> > > > > ><br>
> > > > > > I configure with:<br>
> > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp"<br>
> > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC<br>
> > > > -fopenmp"<br>
> > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0<br>
> > --with-cxx=mpicxx<br>
> > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc<br>
> > > > > > --download-p4est=1 --download-zlib --download-hdf5=1<br>
> > --download-metis<br>
> > > > > > --download-superlu --download-superlu_dist --with-make-np=16<br>
> > > > > > --download-parmetis --download-triangle<br>
> > > > > ><br>
> > > ><br>
> > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64<br>
> > > > > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1<br>
> > --with-x=0<br>
> > > > > > --with-64-bit-indices=0 --with-debugging=0<br>
> > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1<br>
> > > > > > --with-threadsaftey=1 --with-log=1<br>
> > > > > ><br>
> > > > > > Thanks,<br>
> > > > > > Mark<br>
> > > > > ><br>
> > > > ><br>
> > > ><br>
> > > ><br>
> > ><br>
> ><br>
> ><br>
> <br>
<br>
</blockquote></div>