<div dir="ltr"><div>Also, this was the output before the error message:<br><br>Level 0 domain size (m)    1e+04 x    1e+04 x    1e+03, num elements 5 x 5 x 3 (75), size (m) 2000. x 2000. x 500.<br>Level -1 domain size (m)    1e+04 x    1e+04 x    1e+03, num elements 2 x 2 x 2 (8), size (m) 5000. x 5000. x 1000.<br>Level -2 domain size (m)    1e+04 x    1e+04 x    1e+03, num elements 1 x 1 x 1 (1), size (m) 10000. x 10000. x inf.<br><br></div>Which tells me '-da_refine 4' is not registering<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 30, 2017 at 4:15 PM, Justin Chang <span dir="ltr"><<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Okay I'll give it a shot.<br><br></div>Somewhat unrelated, but I tried running this on Cori's Haswell node (loaded the module 'petsc/3.7.4-64'). But I get these errors:<br><br>[0]PETSC ERROR: --------------------- Error Message ------------------------------<wbr>------------------------------<wbr>--<br>[0]PETSC ERROR: Argument out of range<br>[0]PETSC ERROR: Partition in y direction is too fine! 
0 1<br>[0]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" target="_blank">http://www.mcs.anl.gov/petsc/<wbr>documentation/faq.html</a> for trouble shooting.<br>[0]PETSC ERROR: Petsc Release Version 3.7.4, Oct, 02, 2016 <br>[0]PETSC ERROR: /global/u1/j/jychang/Icesheet/<wbr>./ex48 on a arch-cori-opt64-INTEL-3.7.4-64 named nid00020 by jychang Thu Mar 30 14:04:35 2017<br>[0]PETSC ERROR: Configure options --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-size_t=8 --known-mpi-int64_t=1 --known-has-attribute-aligned=<wbr>1 --prefix=/global/common/cori/<wbr>software/petsc/3.7.4-64/hsw/<wbr>intel PETSC_ARCH=arch-cori-opt64-<wbr>INTEL-3.7.4-64 --COPTFLAGS="-mkl -O2 -no-ipo -g -axMIC-AVX512,CORE-AVX2,AVX" --CXXOPTFLAGS="-mkl -O2 -no-ipo -g -axMIC-AVX512,CORE-AVX2,AVX" --FOPTFLAGS="-mkl -O2 -no-ipo -g -axMIC-AVX512,CORE-AVX2,AVX" --with-hdf5-dir=/opt/cray/pe/<wbr>hdf5-parallel/1.8.16/INTEL/15.<wbr>0 --with-hwloc-dir=/global/<wbr>common/cori/software/hwloc/1.<wbr>11.4/hsw --with-scalapack-include=/opt/<wbr>intel/compilers_and_libraries_<wbr>2017.1.132/linux/mkl/include --with-scalapack-lib= --LIBS="-mkl -L/global/common/cori/<wbr>software/petsc/3.7.4-64/hsw/<wbr>intel/lib -I/global/common/cori/<wbr>software/petsc/3.7.4-64/hsw/<wbr>intel/include -L/global/common/cori/<wbr>software/xz/5.2.2/hsw/lib -I/global/common/cori/<wbr>software/xz/5.2.2/hsw/include -L/global/common/cori/<wbr>software/zlib/1.2.8/hsw/intel/<wbr>lib -I/global/common/cori/<wbr>software/zlib/1.2.8/hsw/intel/<wbr>include -L/global/common/cori/<wbr>software/libxml2/2.9.4/hsw/lib -I/global/common/cori/<wbr>software/libxml2/2.9.4/hsw/<wbr>include -L/global/common/cori/<wbr>software/numactl/2.0.11/hsw/<wbr>lib -I/global/common/cori/<wbr>software/numactl/2.0.11/hsw/<wbr>include -L/global/common/cori/<wbr>software/hwloc/1.11.4/hsw/lib 
-I/global/common/cori/<wbr>software/hwloc/1.11.4/hsw/<wbr>include -L/global/common/cori/<wbr>software/openssl/1.1.0a/hsw/<wbr>lib -I/global/common/cori/<wbr>software/openssl/1.1.0a/hsw/<wbr>include -L/global/common/cori/<wbr>software/subversion/1.9.4/hsw/<wbr>lib -I/global/common/cori/<wbr>software/subversion/1.9.4/hsw/<wbr>include -lhwloc -lpciaccess -lxml2 -lz -llzma -Wl,--start-group /opt/intel/compilers_and_<wbr>libraries_2017.1.132/linux/<wbr>mkl/lib/intel64/libmkl_<wbr>scalapack_lp64.a /opt/intel/compilers_and_<wbr>libraries_2017.1.132/linux/<wbr>mkl/lib/intel64/libmkl_core.a /opt/intel/compilers_and_<wbr>libraries_2017.1.132/linux/<wbr>mkl/lib/intel64/libmkl_intel_<wbr>thread.a /opt/intel/compilers_and_<wbr>libraries_2017.1.132/linux/<wbr>mkl/lib/intel64/libmkl_blacs_<wbr>intelmpi_lp64.a -Wl,--end-group -lstdc++" --download-parmetis --download-metis --with-ssl=0 --with-batch --known-mpi-shared-libraries=0 --with-clib-autodetect=0 --with-cxxlib-autodetect=0 --with-debugging=0 --with-fortranlib-autodetect=0 --with-mpiexec=srun --with-shared-libraries=0 --with-x=0 --known-mpi-int64-t=0 --known-bits-per-byte=8 --known-sdot-returns-double=0 --known-snrm2-returns-double=0 --known-level1-dcache-assoc=0 --known-level1-dcache-<wbr>linesize=32 --known-level1-dcache-size=<wbr>32768 --known-memcmp-ok=1 --known-mpi-c-double-complex=1 --known-mpi-long-double=1 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-sizeof-char=1 --known-sizeof-double=8 CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn MPIF77=ftn CFLAGS=-axMIC-AVX512,CORE-<wbr>AVX2,AVX CXXFLAGS=-axMIC-AVX512,CORE-<wbr>AVX2,AVX FFLAGS=-axMIC-AVX512,CORE-<wbr>AVX2,AVX CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn MPIF77=ftn CFLAGS=-fPIC FFLAGS=-fPIC LDFLAGS=-fPIE --download-hypre --with-64-bit-indices<br>[0]PETSC ERROR: #1 DMSetUp_DA_3D() line 298 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/dm/<wbr>impls/da/da3.c<br>[0]PETSC ERROR: #2 DMSetUp_DA() line 
27 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/dm/<wbr>impls/da/dareg.c<br>[0]PETSC ERROR: #3 DMSetUp() line 744 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/dm/<wbr>interface/dm.c<br>[0]PETSC ERROR: #4 DMCoarsen_DA() line 1196 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/dm/<wbr>impls/da/da.c<br>[0]PETSC ERROR: #5 DMCoarsen() line 2371 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/dm/<wbr>interface/dm.c<br>[0]PETSC ERROR: #6 PCSetUp_MG() line 616 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/ksp/pc/<wbr>impls/mg/mg.c<br>[0]PETSC ERROR: #7 PCSetUp() line 968 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/ksp/pc/<wbr>interface/precon.c<br>[0]PETSC ERROR: #8 KSPSetUp() line 390 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/ksp/ksp/<wbr>interface/itfunc.c<br>[0]PETSC ERROR: #9 KSPSolve() line 599 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/ksp/ksp/<wbr>interface/itfunc.c<br>[0]PETSC ERROR: #10 SNESSolve_NEWTONLS() line 230 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/snes/<wbr>impls/ls/ls.c<br>[0]PETSC ERROR: #11 SNESSolve() line 4005 in /global/cscratch1/sd/swowner/<wbr>sleak/petsc-3.7.4/src/snes/<wbr>interface/snes.c<br>[0]PETSC ERROR: #12 main() line 1548 in /global/homes/j/jychang/<wbr>Icesheet/ex48.c<br>[0]PETSC ERROR: PETSc Option Table entries:<br>[0]PETSC ERROR: -da_refine 4<br>[0]PETSC ERROR: -ksp_rtol 1e-7<br>[0]PETSC ERROR: -M 5<br>[0]PETSC ERROR: -N 5<br>[0]PETSC ERROR: -P 3<br>[0]PETSC ERROR: -pc_mg_levels 5<br>[0]PETSC ERROR: -pc_type mg<br>[0]PETSC ERROR: -thi_mat_type baij<br>[0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint@mcs.anl.gov-------<wbr>---<br>Rank 0 [Thu Mar 30 14:04:35 2017] [c0-0c0s5n0] application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0<br>srun: error: nid00020: task 0: Aborted<br>srun: Terminating job step 4363145.1z<br><br></div>it seems to me the PETSc 
from this module is not registering the '-da_refine' entry. This is strange because I have no issue with this on the latest petsc-dev version. Does anyone know about this error and/or why it happens?<br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 30, 2017 at 3:39 PM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span>On Thu, Mar 30, 2017 at 3:38 PM, Justin Chang <span dir="ltr"><<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Okay, got it. What are the options for setting GAMG as the coarse solver?</div></blockquote><div><br></div></span><div>-mg_coarse_pc_type gamg I think</div><div><div class="m_8496017602071215039h5"><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="gmail_extra"><div class="gmail_quote">On Thu, Mar 30, 2017 at 3:37 PM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span>On Thu, Mar 30, 2017 at 3:04 PM, Justin Chang <span dir="ltr"><<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Yeah based on my experiments it seems setting pc_mg_levels to $DAREFINE + 1 has decent performance. 
<div><br></div><div>1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In some of the PETSc tutorial slides (e.g., <a href="http://www.mcs.anl.gov/petsc/documentation/tutorials/TutorialCEMRACS2016.pdf" target="_blank">http://www.mcs.anl.gov/<wbr>petsc/documentation/tutorials/<wbr>TutorialCEMRACS2016.pdf</a> on slide 203/227) they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran this, it was almost twice as slow as if $MGLEVELS >= $DAREFINE</div></div></blockquote><div><br></div></span><div>Depending on how big the initial grid is, you may want this. There is a balance between coarse grid and fine grid work.</div><span><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>2) So I understand that one cannot refine further than one grid point in each direction, but is there any benefit to having $MGLEVELS > $DAREFINE by a lot?</div></div></blockquote><div><br></div></span><div>Again, it depends on the size of the initial grid.</div><div><br></div><div>On really large problems, you want to use GAMG as the coarse solver, which will move the problem onto a smaller number of nodes</div><div>so that you can coarsen further.</div><div><br></div><div>   Matt</div><span><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Thanks,</div><div>Justin</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 30, 2017 at 2:35 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
  -da_refine $DAREFINE  determines how large the final problem will be.<br>
<br>
  By default, if you don't supply -pc_mg_levels, it uses $DAREFINE + 1 as the number of MG levels; for example, -da_refine 1 results in 2 levels of multigrid.<br>
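[A sketch of this levels arithmetic, in hypothetical Python for illustration. It assumes each coarser MG level roughly halves the element count per direction; PETSc's actual DMDA coarsening is point-based, so the numbers are only indicative, and `element_counts` is a made-up helper, not a PETSc routine.]<br>

```python
# Sketch only: assumes each coarser MG level roughly halves the element
# count in a direction (PETSc's real DMDA coarsening is point-based).
def element_counts(m, n_levels):
    """Approximate per-direction element counts, finest level first."""
    counts = [m]
    for _ in range(n_levels - 1):
        m //= 2  # each coarser level roughly halves the grid
        counts.append(m)
    return counts

# If -da_refine 4 takes effect, a 5-element direction becomes 5 * 2**4 = 80
# elements, and -pc_mg_levels 5 ($DAREFINE + 1) coarsens cleanly:
print(element_counts(5 * 2**4, 5))  # [80, 40, 20, 10, 5]

# If the refinement does not register, coarsening the original 5-element
# grid bottoms out immediately, as in the failed Cori run earlier:
print(element_counts(5, 5))  # [5, 2, 1, 0, 0]
```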
<div class="m_8496017602071215039m_3211230647724409430m_6839945993956575678m_3155608469020584523m_-6704646030448738838HOEnZb"><div class="m_8496017602071215039m_3211230647724409430m_6839945993956575678m_3155608469020584523m_-6704646030448738838h5"><br>
<br>
> On Mar 30, 2017, at 2:17 PM, Justin Chang <<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>> wrote:<br>
><br>
> Hi all,<br>
><br>
> Just a general conceptual question: say I am tinkering around with SNES ex48.c and am running the program with these options:<br>
><br>
> mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS<br>
><br>
> I am not too familiar with mg, but it seems to me there is a very strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even the initial coarse grid size (provided by $X/YZSEED).<br>
><br>
> Is there a rule of thumb for how these parameters should be set? I am guessing it is probably also hardware/architecture dependent?<br>
><br>
> Thanks,<br>
> Justin<br>
<br>
</div></div></blockquote></div><br></div>
</blockquote></span></div><br><br clear="all"><span class="m_8496017602071215039m_3211230647724409430HOEnZb"><font color="#888888"><span><div><br></div>-- <br><div class="m_8496017602071215039m_3211230647724409430m_6839945993956575678m_3155608469020584523gmail_signature" data-smartmail="gmail_signature">What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div>
</span></font></span></div></div>
</blockquote></div><br></div>
</blockquote></div></div></div><div><div class="m_8496017602071215039h5"><br><br clear="all"><div><br></div>
</div></div></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>