By the way, I tried using a different petsc installation, and now, rather than the segmentation fault, I get the following error:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Mat type mffd
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.5.4, May, 23, 2015
[0]PETSC ERROR: ./blowup_batch_refine on a arch-darwin-c-debug named gs_air by gideon Mon Sep 7 21:32:18 2015
[0]PETSC ERROR: Configure options --download-mpich=yes --download-suitesparse=yes --download-superlu=yes --download-superlu_dist=yes --download-mumps=yes --download-sprng=yes --with-cxx=clang++ --with-cc=clang --with-fc=gfortran --download-metis=yes --download-parmetis=yes --download-scalapack=yes
[0]PETSC ERROR: #3892 MatSetValues() line 1116 in /opt/petsc-3.5.4/src/mat/interface/matrix.c

-gideon
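For context on the error above: with -snes_mf_operator the Mat passed to the Jacobian callback as the operator is a MATMFFD, which does not support MatSetValues(), so entries may only be assembled into the preconditioning matrix. A minimal sketch of a Jacobian routine in the PETSc 3.5 calling convention, assuming this is the cause; FormJacobian and the inserted entry are illustrative placeholders, not code from this thread:

#include <petscsnes.h>

/* Sketch: assemble Jacobian entries only into the preconditioning matrix P.
   J may be the MATMFFD operator when -snes_mf_operator is used, so it must
   not receive MatSetValues(); it only needs its assembly calls. */
PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat P, void *ctx)
{
  PetscErrorCode ierr;
  PetscInt       row = 0, col = 0;   /* placeholder entry */
  PetscScalar    v   = 1.0;

  PetscFunctionBeginUser;
  ierr = MatSetValues(P, 1, &row, 1, &col, &v, INSERT_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  if (J != P) {                      /* e.g. J is the matrix-free operator */
    ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}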
On Sep 7, 2015, at 9:22 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

Hmm,

OK, you can try running it directly in the debugger since it is one process. Type

    gdb ./blowup_batch_refine

then, when the debugger comes up (if it does not, cut and paste all output and send it),

    run -on_error_abort -snes_mf_operator

and any other options you normally use.

Barry

On Sep 7, 2015, at 8:18 PM, Gideon Simpson <gideon.simpson@gmail.com> wrote:

Running with that flag gives me this:

[0]PETSC ERROR: PETSC: Attaching gdb to ./blowup_batch_refine of pid 16111 on gs_air
Unable to start debugger: No such file or directory

-gideon

On Sep 7, 2015, at 9:11 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

This should not happen. Run with a debug version of PETSc installed and the option -start_in_debugger noxterm. Once the debugger starts up, type cont, and when it crashes type where or bt. Send all output.

Barry

On Sep 7, 2015, at 8:09 PM, Gideon Simpson <gideon.simpson@gmail.com> wrote:

I’m getting an error with -snes_mf_operator:

  0 SNES Function norm 1.421454390131e-02
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Signal received
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.5.3, unknown
[0]PETSC ERROR: ./blowup_batch_refine on a arch-macports named gs_air by gideon Mon Sep 7 21:08:19 2015
[0]PETSC ERROR: Configure options --prefix=/opt/local --prefix=/opt/local/lib/petsc --with-valgrind=0 --with-shared-libraries --with-debugging=0 --with-c2html-dir=/opt/local --with-x=0 --with-blas-lapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate --with-hwloc-dir=/opt/local --with-suitesparse-dir=/opt/local --with-superlu-dir=/opt/local --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local --with-superlu_dist-dir=/opt/local CC=/opt/local/bin/mpicc-mpich-mp CXX=/opt/local/bin/mpicxx-mpich-mp FC=/opt/local/bin/mpif90-mpich-mp F77=/opt/local/bin/mpif90-mpich-mp F90=/opt/local/bin/mpif90-mpich-mp COPTFLAGS=-Os CXXOPTFLAGS=-Os FOPTFLAGS=-Os LDFLAGS="-L/opt/local/lib -Wl,-headerpad_max_install_names" CPPFLAGS=-I/opt/local/include CFLAGS="-Os -arch x86_64" CXXFLAGS=-Os FFLAGS=-Os FCFLAGS=-Os F90FLAGS=-Os PETSC_ARCH=arch-macports --with-mpiexec=mpiexec-mpich-mp
[0]PETSC ERROR: #1 User provided function() line 0 in unknown file
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0

-gideon

On Sep 7, 2015, at 9:01 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

My guess is the Jacobian is not correct (or correct "enough"), hence PETSc SNES is generating a poor descent direction. You can try -snes_mf_operator -ksp_monitor_true_residual as additional arguments. What happens?

Barry

On Sep 7, 2015, at 7:49 PM, Gideon Simpson <gideon.simpson@gmail.com> wrote:

No problem, Matt, I don’t think we had previously discussed that output. Here is a case where things fail.

  0 SNES Function norm 4.027481756921e-09
  1 SNES Function norm 1.760477878365e-12
Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE iterations 1
  0 SNES Function norm 5.066222213176e+03
  1 SNES Function norm 8.484697184230e+02
  2 SNES Function norm 6.549559723294e+02
  3 SNES Function norm 5.770723278153e+02
  4 SNES Function norm 5.237702240594e+02
  5 SNES Function norm 4.753909019848e+02
  6 SNES Function norm 4.221784590755e+02
  7 SNES Function norm 3.806525080483e+02
  8 SNES Function norm 3.762054656019e+02
  9 SNES Function norm 3.758975226873e+02
 10 SNES Function norm 3.757032042706e+02
 11 SNES Function norm 3.728798164234e+02
 12 SNES Function norm 3.723078741075e+02
 13 SNES Function norm 3.721848059825e+02
 14 SNES Function norm 3.720227575629e+02
 15 SNES Function norm 3.720051998555e+02
 16 SNES Function norm 3.718945430587e+02
 17 SNES Function norm 3.700412694044e+02
 18 SNES Function norm 3.351964889461e+02
 19 SNES Function norm 3.096016086233e+02
 20 SNES Function norm 3.008410789787e+02
 21 SNES Function norm 2.752316716557e+02
 22 SNES Function norm 2.707658474165e+02
 23 SNES Function norm 2.698436736049e+02
 24 SNES Function norm 2.618233857172e+02
 25 SNES Function norm 2.600121920634e+02
 26 SNES Function norm 2.585046423168e+02
 27 SNES Function norm 2.568551090220e+02
 28 SNES Function norm 2.556404537064e+02
 29 SNES Function norm 2.536353523683e+02
 30 SNES Function norm 2.533596070171e+02
 31 SNES Function norm 2.532324379596e+02
 32 SNES Function norm 2.531842335211e+02
 33 SNES Function norm 2.531684527520e+02
 34 SNES Function norm 2.531637604618e+02
 35 SNES Function norm 2.531624767821e+02
 36 SNES Function norm 2.531621359093e+02
 37 SNES Function norm 2.531620504925e+02
 38 SNES Function norm 2.531620350055e+02
 39 SNES Function norm 2.531620310522e+02
 40 SNES Function norm 2.531620300471e+02
 41 SNES Function norm 2.531620298084e+02
 42 SNES Function norm 2.531620297478e+02
 43 SNES Function norm 2.531620297324e+02
 44 SNES Function norm 2.531620297303e+02
 45 SNES Function norm 2.531620297302e+02
Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 45
  0 SNES Function norm 9.636339304380e+03
  1 SNES Function norm 8.997731184634e+03
  2 SNES Function norm 8.120498349232e+03
  3 SNES Function norm 7.322379894820e+03
  4 SNES Function norm 6.599581599149e+03
  5 SNES Function norm 6.374872854688e+03
  6 SNES Function norm 6.372518007653e+03
  7 SNES Function norm 6.073996314301e+03
  8 SNES Function norm 5.635965277054e+03
  9 SNES Function norm 5.155389064046e+03
 10 SNES Function norm 5.080567902638e+03
 11 SNES Function norm 5.058878643969e+03
 12 SNES Function norm 5.058835649793e+03
 13 SNES Function norm 5.058491285707e+03
 14 SNES Function norm 5.057452865337e+03
 15 SNES Function norm 5.057226140688e+03
 16 SNES Function norm 5.056651272898e+03
 17 SNES Function norm 5.056575190057e+03
 18 SNES Function norm 5.056574632598e+03
 19 SNES Function norm 5.056574520229e+03
 20 SNES Function norm 5.056574492569e+03
 21 SNES Function norm 5.056574485124e+03
 22 SNES Function norm 5.056574483029e+03
 23 SNES Function norm 5.056574482427e+03
 24 SNES Function norm 5.056574482302e+03
 25 SNES Function norm 5.056574482287e+03
 26 SNES Function norm 5.056574482282e+03
 27 SNES Function norm 5.056574482281e+03
Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 27
SNES Object: 1 MPI processes
  type: newtonls
  maximum iterations=50, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=28
  total number of function evaluations=323
  total number of grid sequence refinements=2
  SNESLineSearch Object: 1 MPI processes
    type: bt
      interpolation: cubic
      alpha=1.000000e-04
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=40
  KSP Object: 1 MPI processes
    type: gmres
      GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 1 MPI processes
    type: lu
      LU: out-of-place factorization
      tolerance for zero pivot 2.22045e-14
      matrix ordering: nd
      factor fill ratio given 0, needed 0
        Factored matrix follows:
          Mat Object: 1 MPI processes
            type: seqaij
            rows=15991, cols=15991
            package used to perform factorization: mumps
            total: nonzeros=255801, allocated nonzeros=255801
            total number of mallocs used during MatSetValues calls =0
              MUMPS run parameters:
                SYM (matrix type): 0
                PAR (host participation): 1
                ICNTL(1) (output for error): 6
                ICNTL(2) (output of diagnostic msg): 0
                ICNTL(3) (output for global info): 0
                ICNTL(4) (level of printing): 0
                ICNTL(5) (input mat struct): 0
                ICNTL(6) (matrix prescaling): 7
                ICNTL(7) (sequentia matrix ordering):6
                ICNTL(8) (scalling strategy): 77
                ICNTL(10) (max num of refinements): 0
                ICNTL(11) (error analysis): 0
                ICNTL(12) (efficiency control): 1
                ICNTL(13) (efficiency control): 0
                ICNTL(14) (percentage of estimated workspace increase): 20
                ICNTL(18) (input mat struct): 0
                ICNTL(19) (Shur complement info): 0
                ICNTL(20) (rhs sparse pattern): 0
                ICNTL(21) (somumpstion struct): 0
                ICNTL(22) (in-core/out-of-core facility): 0
                ICNTL(23) (max size of memory can be allocated locally):0
                ICNTL(24) (detection of null pivot rows): 0
                ICNTL(25) (computation of a null space basis): 0
                ICNTL(26) (Schur options for rhs or solution): 0
                ICNTL(27) (experimental parameter): -8
                ICNTL(28) (use parallel or sequential ordering): 1
                ICNTL(29) (parallel ordering): 0
                ICNTL(30) (user-specified set of entries in inv(A)): 0
                ICNTL(31) (factors is discarded in the solve phase): 0
                ICNTL(33) (compute determinant): 0
                CNTL(1) (relative pivoting threshold): 0.01
                CNTL(2) (stopping criterion of refinement): 1.49012e-08
                CNTL(3) (absomumpste pivoting threshold): 0
                CNTL(4) (vamumpse of static pivoting): -1
                CNTL(5) (fixation for null pivots): 0
                RINFO(1) (local estimated flops for the elimination after analysis):
                  [0] 1.95838e+06
                RINFO(2) (local estimated flops for the assembly after factorization):
                  [0] 143924
                RINFO(3) (local estimated flops for the elimination after factorization):
                  [0] 1.95943e+06
                INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization):
                  [0] 7
                INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization):
                  [0] 7
                INFO(23) (num of pivots eliminated on this processor after factorization):
                  [0] 15991
                RINFOG(1) (global estimated flops for the elimination after analysis): 1.95838e+06
                RINFOG(2) (global estimated flops for the assembly after factorization): 143924
                RINFOG(3) (global estimated flops for the elimination after factorization): 1.95943e+06
                (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0,0)*(2^0)
                INFOG(3) (estimated real workspace for factors on all processors after analysis): 255801
                INFOG(4) (estimated integer workspace for factors on all processors after analysis): 127874
                INFOG(5) (estimated maximum front size in the complete tree): 11
                INFOG(6) (number of nodes in the complete tree): 3996
                INFOG(7) (ordering option effectively use after analysis): 6
                INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 86
                INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 255865
                INFOG(10) (total integer space store the matrix factors after factorization): 127890
                INFOG(11) (order of largest frontal matrix after factorization): 11
                INFOG(12) (number of off-diagonal pivots): 19
                INFOG(13) (number of delayed pivots after factorization): 8
                INFOG(14) (number of memory compress after factorization): 0
                INFOG(15) (number of steps of iterative refinement after solution): 0
                INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 7
                INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 7
                INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 7
                INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 7
                INFOG(20) (estimated number of entries in the factors): 255801
                INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 7
                INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 7
                INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0
                INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1
                INFOG(25) (after factorization: number of pivots modified by static pivoting): 0
                INFOG(28) (after factorization: number of null pivots encountered): 0
                INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 255865
                INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 5, 5
                INFOG(32) (after analysis: type of analysis done): 1
                INFOG(33) (value used for ICNTL(8)): 7
                INFOG(34) (exponent of the determinant if determinant is requested): 0
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: seqaij
    rows=15991, cols=15991
    total: nonzeros=223820, allocated nonzeros=431698
    total number of mallocs used during MatSetValues calls =15991
      using I-node routines: found 4000 nodes, limit used is 5

-gideon

On Sep 7, 2015, at 8:40 PM, Matthew Knepley <knepley@gmail.com> wrote:

On Mon, Sep 7, 2015 at 7:32 PM, Gideon Simpson <gideon.simpson@gmail.com> wrote:

Barry,

I finally got a chance to really try using the grid sequencing within my code. I find that, in some cases, even if it can solve successfully on the coarsest mesh, the SNES fails, usually due to a line search failure, when it tries to compute along the grid sequence. Would you have any suggestions?

I apologize if I have asked before, but can you give me -snes_view for the solver? I could not find it in the email thread.

I would suggest trying to fiddle with the line search, or precondition it with Richardson. It would be nice to see -snes_monitor for the runs that fail, and then we can break down the residual into fields and look at it again (if my custom residual monitor does not work, we can write one easily). Seeing which part of the residual does not converge is key to designing the NASM for the problem. I have just seen the virtuoso of this, Xiao-Chuan Cai, present it. We need better monitoring in PETSc.

Thanks,

   Matt
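Regarding the idea of breaking the residual into fields: a minimal sketch of a custom SNES monitor that reports a per-field residual norm, assuming the unknowns are interlaced so that VecStrideNorm() applies; the monitor name and the field index are illustrative placeholders, not the monitor mentioned above:

#include <petscsnes.h>

/* Sketch of a per-field residual monitor; field 0 is illustrative. */
static PetscErrorCode ResidualFieldMonitor(SNES snes, PetscInt it, PetscReal fnorm, void *ctx)
{
  Vec            r;
  PetscReal      nrm0;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = SNESGetFunction(snes, &r, NULL, NULL);CHKERRQ(ierr);   /* current residual vector */
  ierr = VecStrideNorm(r, 0, NORM_2, &nrm0);CHKERRQ(ierr);      /* norm of field 0 only */
  ierr = PetscPrintf(PETSC_COMM_WORLD, "iteration %D: ||F|| = %g, field 0 norm = %g\n",
                     it, (double)fnorm, (double)nrm0);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* registration, e.g. after SNESCreate():
     ierr = SNESMonitorSet(snes, ResidualFieldMonitor, NULL, NULL);CHKERRQ(ierr);
*/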
-gideon

On Aug 28, 2015, at 4:21 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

On Aug 28, 2015, at 3:04 PM, Gideon Simpson <gideon.simpson@gmail.com> wrote:

Yes, if I continue in this parameter on the coarse mesh, I can generally solve at all values. I do find that I need to do some amount of continuation to solve near the endpoint. The problem is that on the coarse mesh, things are not fully resolved at all the values along the continuation parameter, and I would like to do refinement.

One subtlety is that I actually want the intermediate continuation solutions too. Currently, without doing any grid sequencing, I compute each, write it to disk, and then go on to the next one. So I now need to go back and refine them. I was thinking that perhaps I could refine them on the fly, dump them to disk, and use the coarse solution as the starting guess at the next iteration, but that would seem to require resetting the SNES back to the coarse grid.

The alternative would be to just script the mesh refinement in a post-processing stage, where each value of the continuation parameter is loaded on the coarse mesh and refined. Perhaps that’s the most practical thing to do.

I would do the following. Create your DM and create a SNES that will do the continuation.

loop over continuation parameter

    SNESSolve(snes,NULL,Ucoarse);

    if (you decide you want to see the refined solution at this continuation point) {
        SNESCreate(comm,&snesrefine);
        SNESSetDM()
        etc.
        SNESSetGridSequence(snesrefine,)
        SNESSolve(snesrefine,0,Ucoarse);
        SNESGetSolution(snesrefine,&Ufine);
        VecView(Ufine), or do whatever you want to do with the Ufine at that continuation point
        SNESDestroy(snesrefine);
    end if

end loop over continuation parameter.

Barry
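Filling in the outline above as compilable code, under the assumption that the problem lives on a DM and that FormFunction, AppCtx, and the step count are user-supplied placeholders; this is an illustrative sketch, not code from the thread:

#include <petscsnes.h>

typedef struct { PetscReal p; } AppCtx;                     /* continuation parameter (placeholder) */
extern PetscErrorCode FormFunction(SNES, Vec, Vec, void*);  /* user residual routine (placeholder) */

PetscErrorCode ContinuationWithRefinement(DM dm, SNES snes, Vec Ucoarse, AppCtx *user)
{
  PetscErrorCode ierr;
  PetscInt       i, nsteps = 10;                            /* placeholder number of continuation steps */

  PetscFunctionBeginUser;
  for (i = 0; i <= nsteps; i++) {
    user->p = (PetscReal)i / nsteps;                        /* update continuation parameter */
    ierr = SNESSolve(snes, NULL, Ucoarse);CHKERRQ(ierr);    /* coarse solve, warm-started from previous step */

    /* refine this continuation point with a throwaway SNES that grid-sequences */
    {
      SNES snesrefine;
      Vec  Ufine;
      ierr = SNESCreate(PetscObjectComm((PetscObject)dm), &snesrefine);CHKERRQ(ierr);
      ierr = SNESSetDM(snesrefine, dm);CHKERRQ(ierr);
      ierr = SNESSetFunction(snesrefine, NULL, FormFunction, user);CHKERRQ(ierr);
      ierr = SNESSetGridSequence(snesrefine, 2);CHKERRQ(ierr);   /* e.g. two refinement levels */
      ierr = SNESSetFromOptions(snesrefine);CHKERRQ(ierr);
      ierr = SNESSolve(snesrefine, NULL, Ucoarse);CHKERRQ(ierr);
      ierr = SNESGetSolution(snesrefine, &Ufine);CHKERRQ(ierr);  /* lives on the finest mesh */
      ierr = VecView(Ufine, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); /* or write it to disk */
      ierr = SNESDestroy(&snesrefine);CHKERRQ(ierr);
    }
  }
  PetscFunctionReturn(0);
}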
-gideon

On Aug 28, 2015, at 3:55 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

> 3. This problem is actually part of a continuation problem that roughly looks like this
>
> for (continuation parameter p = 0 to 1) {
>     solve with parameter p_i using solution from p_{i-1},
> }
>
> What I would like to do is to start the solver, for each value of parameter p_i, on the coarse mesh, and then do grid sequencing on that. But it appears that after doing grid sequencing on the initial p_0 = 0, the SNES is set to use the finer mesh.

So you are using continuation to give you a good enough initial guess on the coarse level to even get convergence on the coarse level? First I would check if you even need the continuation (or can you not even solve the coarse problem without it).

If you do need the continuation, then you will need to tweak how you do the grid sequencing. I think this will work:

Do not use -snes_grid_sequencing.

Run SNESSolve() as many times as you want with your continuation parameter. This will all happen on the coarse mesh.

Call SNESSetGridSequence().

Then call SNESSolve() again and it will do one solve on the coarse level and then interpolate to the next level, etc.

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
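For completeness, the earlier recipe above (all continuation solves on the coarse mesh, then a final grid-sequenced solve reusing the same SNES) might look roughly like the following sketch; nsteps, AppCtx, the parameter update, and the two refinement levels are illustrative assumptions:

#include <petscsnes.h>

typedef struct { PetscReal p; } AppCtx;   /* continuation parameter (placeholder) */

PetscErrorCode CoarseContinuationThenRefine(SNES snes, Vec Ucoarse, AppCtx *user)
{
  PetscErrorCode ierr;
  PetscInt       i, nsteps = 10;          /* placeholder step count */
  Vec            Ufine;

  PetscFunctionBeginUser;
  /* all continuation steps happen on the coarse mesh; grid sequencing is off here */
  for (i = 0; i <= nsteps; i++) {
    user->p = (PetscReal)i / nsteps;
    ierr = SNESSolve(snes, NULL, Ucoarse);CHKERRQ(ierr);
  }
  /* now turn on grid sequencing and solve once more: one coarse solve, then
     interpolation and re-solve on each finer level */
  ierr = SNESSetGridSequence(snes, 2);CHKERRQ(ierr);
  ierr = SNESSolve(snes, NULL, Ucoarse);CHKERRQ(ierr);
  ierr = SNESGetSolution(snes, &Ufine);CHKERRQ(ierr);    /* solution on the finest mesh */
  ierr = VecView(Ufine, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}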