<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Oct 23, 2017 at 8:30 PM, zakaryah . <span dir="ltr"><<a href="mailto:zakaryah@gmail.com" target="_blank">zakaryah@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div style="font-size:small">Thanks Barry. For my problem, restriction seems more natural. I still don't understand how to actually introduce the field. As I understand it, the multi-grid procedures (coarsening, interpolating, etc.) are performed on the state variables, which for my problem can be naturally represented by a DMDA. I imagine for the external fields, I create a second DMDA, but I'm not sure if I somehow couple it to the state variable DMDA or just pass it through the user defined context or what.</div><div style="font-size:small"><br></div><div style="font-size:small">I have another question about the DMComposite. How do I calculate the value of the function for the redundant DM, as it depends on the state variables at all grid locations? The example ex21 is easy, because the function for the redundant variable depends only on a Lagrange multiplier which is known to belong to the first processor. I am hoping that I can do something like increment to the function value for the redundant DM? Then everything is safe because it happens between VecGetArray and VecRestoreArray? Or, is the thread safety due to the DMCompositeScatter/<wbr>DMCompositeGather calls? In that case, am I forced to use ADD_VALUES?</div></div></blockquote><div><br></div><div>If its the global vector, one guy will have the value and everyone else will have no values. Then you just stick in the value on that process.</div><div>To calculate the value, just have everyone calculate the local value and then use MPI_Reduce() to get the value to stick in.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div style="font-size:small">My last question, for now. Matt - you said it would be tricky to preallocate the composite matrix, and that I should turn off allocation and just stick in what I need. Does this mean I should call DMSetMatrixStructureOnly with PETSC_TRUE? In that case, can I assume that the initial assembly of the matrix will be slow due to allocation on the fly, but since the matrix always has the same non-zero structure, this is not an issue for the many repeated uses of the matrix that will occur?</div></div></blockquote><div><br></div><div>No. Use the matrix the DM gives you, but it will have no allocation for the coupling. So call MatSetOption() with PETSC_FALSE for the ALLOCATION_ERR flag.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div style="font-size:small">Thanks so much for all the help.</div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Oct 23, 2017 at 3:37 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><br>
> On Oct 23, 2017, at 2:13 PM, zakaryah . <<a href="mailto:zakaryah@gmail.com" target="_blank">zakaryah@gmail.com</a>> wrote:<br>
><br>
> Thanks Matt and Jed, the suggestion of DMComposite is perfect for what I want to do. Thanks Lukasz, as your implementation is also illuminating.<br>
><br>
> I'm working on the code now. I've realized that this is a good opportunity to set up the code so that it will work properly with multigrid. My problem has a dependence on some external fields. In other words, there is constant data at each point in the DMDA. I don't know how to implement those so that they will be scaled properly as the grid is coarsened/refined. Any hints?<br>
<br>
</span> Depending on the meaning of the fields, to coarsen one can use either injection (just grab the values at the coarse grid points) or restriction (normally the transpose of linear interpolation, which roughly averages the values near the coarse grid points). To refine one generally uses linear interpolation. The DMDA can provide these operations (it also provides them to the geometric multigrid solver); see DMCreateInjection(), DMCreateInterpolation(), and DMCreateRestriction().<br>
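For what it's worth, a minimal sketch of moving such a field between two grid levels (here da_fine is assumed to be the DMDA that holds the field; everything else is created in the sketch and the names are illustrative):<br>
<pre>
DM  da_coarse;
Mat P, Inj;
Vec field_fine, field_coarse;

DMCoarsen(da_fine, PETSC_COMM_WORLD, &da_coarse);
DMCreateGlobalVector(da_fine, &field_fine);
DMCreateGlobalVector(da_coarse, &field_coarse);

DMCreateInterpolation(da_coarse, da_fine, &P, NULL);   /* linear interpolation */
MatInterpolate(P, field_coarse, field_fine);           /* coarse to fine (refine) */
MatRestrict(P, field_fine, field_coarse);              /* fine to coarse (restriction) */

DMCreateInjection(da_coarse, da_fine, &Inj);
MatRestrict(Inj, field_fine, field_coarse);            /* fine to coarse (plain injection) */
</pre>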
<span class="m_-4580612382856558749HOEnZb"><font color="#888888"><br>
Barry<br>
</font></span><div class="m_-4580612382856558749HOEnZb"><div class="m_-4580612382856558749h5"><br>
><br>
> Thanks again - it's amazing to me how thorough the PETSc methods are, and the ease with which the user can access so many powerful methods, while your support is so knowledgeable and responsive.<br>
><br>
> On Sun, Oct 22, 2017 at 11:41 AM, Jed Brown <<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>> wrote:<br>
> Alternatively, see DMComposite and src/snes/examples/tutorials/ex22.c.<br>
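For concreteness, a minimal sketch of that composite layout for this problem (an 11x7x6 displacement DMDA with 3 dof plus one redundant scalar for the extra parameter; the rank-0 choice and all names are illustrative, and snes is assumed to exist already):<br>
<pre>
DM packer, da, red;

DMCompositeCreate(PETSC_COMM_WORLD, &packer);
DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
             DMDA_STENCIL_BOX, 11, 7, 6, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
             3, 1, NULL, NULL, NULL, &da);
DMCompositeAddDM(packer, da);
DMRedundantCreate(PETSC_COMM_WORLD, 0, 1, &red);   /* 1 extra dof, stored on rank 0 */
DMCompositeAddDM(packer, red);
SNESSetDM(snes, packer);
</pre>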
><br>
> Lukasz Kaczmarczyk <<a href="mailto:Lukasz.Kaczmarczyk@glasgow.ac.uk" target="_blank">Lukasz.Kaczmarczyk@glasgow.ac.uk</a>> writes:<br>
><br>
> > On 22 Oct 2017, at 03:16, zakaryah . <<a href="mailto:zakaryah@gmail.com" target="_blank">zakaryah@gmail.com</a>> wrote:<br>
> ><br>
> > OK, it turns out Lukasz was exactly correct. With whatever method I try, the solver or stepper approaches a critical point, which is associated with some kind of snap-through. I have looked into the control techniques and they are pretty ingenious, and I think they should work for my problem, in that I hope to continue through the critical point. I have a technical question about the implementation, though.<br>
> ><br>
> > Following Riks 1979 for example, the control parameter is the approximate arc-length in the phase space of loading intensity and displacements. It represents one additional variable in the system, and there is one additional equation in the system (in Riks, this is eq. 3.9).<br>
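(For readers without Riks 1979 at hand: the extra equation is an arc-length constraint. A common "spherical" form is Δu·Δu + ψ²Δλ² = Δs², where Δu is the increment of the displacements, Δλ the increment of the load intensity, ψ a scaling factor, and Δs the prescribed arc-length step; this is the generic form rather than a quotation of eq. 3.9.)<br>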
> ><br>
> > In my implementation, the displacements are implemented as a DMDA with 3 dof, since I'm working in 3D. I'm not sure about the best way to add the single additional variable and equality. The way I see it, I either give up on using the DMDA, in which case I'm not sure how to efficiently implement the stencil I need to calculate spatial derivatives of the displacements, or I have to add a rather large number of extra variables. For example, if my DMDA is WxHxD, I would have to make it (W+1)xHxD, and each of the extra HxD variables will have 3 dof. Then 3xHxD-1 variables are in the nullspace (because they don't represent anything, so I would have to add a bunch of zeros to the function and the Jacobian), while the remaining variable is used as the control parameter. I'm aware of other methods, e.g. Crisfield 1983, but I'm interested in whether there is a straightforward way to implement Riks' method in PETSc. I'm sure I'm missing something so hopefully someone can give me some hints.<br>
> ><br>
> > Thanks for all the help!<br>
> ><br>
> ><br>
> > Zakaryah,<br>
> ><br>
> > If you like to have a peek how we doing that, you can see<br>
> > <a href="http://mofem.eng.gla.ac.uk/mofem/html/_arc_length_tools_8hpp_source.html" rel="noreferrer" target="_blank">http://mofem.eng.gla.ac.uk/mof<wbr>em/html/_arc_length_tools_8hpp<wbr>_source.html</a><br>
> > <a href="http://mofem.eng.gla.ac.uk/mofem/html/_arc_length_tools_8cpp_source.html" rel="noreferrer" target="_blank">http://mofem.eng.gla.ac.uk/mof<wbr>em/html/_arc_length_tools_8cpp<wbr>_source.html</a><br>
> ><br>
> > The implementation has features specific to MoFEM. However, you can follow the same idea: implement a shell matrix, which adds a column and a row for the controlling and controlled equation, respectively. This shell matrix has to have an operator for matrix-vector multiplication. Then you have to add a preconditioner, which is based on Riks and others. In fact you can also use the FieldSplit preconditioner; the Riks method is some variant of a Schur complement.<br>
> ><br>
> > Such an implementation allows running a multi-grid preconditioner, and other preconditioners, together with the control equation.<br>
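As a rough sketch of that idea (this is not the MoFEM code; K, q, a, nlocal and N are illustrative stand-ins for the tangent matrix, load direction, control-equation gradient, and problem sizes):<br>
<pre>
/* Bordered operator [ K  q ; a^T  0 ] wrapped as a MATSHELL (sketch only). */
typedef struct { Mat K; Vec q, a; } ArcCtx;

static PetscErrorCode ArcMult(Mat A, Vec x, Vec y)
{
  ArcCtx *ctx;
  MatShellGetContext(A, (void **)&ctx);
  /* split x into (x_u, x_lambda), then set
       y_u      = K x_u + x_lambda q        (equilibrium rows)
       y_lambda = a . x_u                   (control-equation row)
     the scatter of the single extra entry is omitted in this sketch */
  return 0;
}

MatCreateShell(PETSC_COMM_WORLD, nlocal, nlocal, N + 1, N + 1, (void *)&ctx, &A);
MatShellSetOperation(A, MATOP_MULT, (void (*)(void))ArcMult);
/* then hand A to KSP/SNES and pair it with a Riks-style PC or PCFIELDSPLIT/Schur */
</pre>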
> ><br>
> > Hope that this will be helpful.<br>
> ><br>
> > Regards,<br>
> > Lukasz<br>
> ><br>
> ><br>
> ><br>
> > On Thu, Oct 12, 2017 at 2:02 PM, zakaryah . <<a href="mailto:zakaryah@gmail.com" target="_blank">zakaryah@gmail.com</a>> wrote:<br>
> > Thanks for the response, Matt - these are excellent questions.<br>
> ><br>
> > On theoretical grounds, I am certain that the solution to the continuous PDE exists. Without any serious treatment, I think this means the discretized system should have a solution up to discretization error, but perhaps this is indeed a bad approach.<br>
> ><br>
> > I am not sure whether the equations are "really hard to solve". At each point, the equations are third order polynomials of the state variable at that point and at nearby points (i.e. in the stencil). One possible complication is that the external forces which are applied to the interior of the material can be fairly complex - they are smooth, but they can have many inflection points.<br>
> ><br>
> > I don't have a great test case for which I know a good solution. To my thinking, there is no way that time-stepping the parabolic version of the same PDE can fail to yield a solution at infinite time. So, I'm going to try starting there. Converting the problem to a minimization is a bit trickier, because the discretization has to be performed one step earlier in the calculation, and therefore the gradient and Hessian would need to be recalculated.<br>
> ><br>
> > Even if there are some problems with time-stepping (speed of convergence?), maybe I can use the solutions as better test cases for the elliptic PDE solved via SNES.<br>
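(If the time-stepping route is taken, PETSc's TS can drive it; a rough sketch, where da, user and FormRHSFunction stand in for the existing DMDA, context, and a TS-style wrapper of the residual:)<br>
<pre>
TS ts;

TSCreate(PETSC_COMM_WORLD, &ts);
TSSetDM(ts, da);
TSSetRHSFunction(ts, NULL, FormRHSFunction, &user);   /* du/dt = -gradient of the action */
TSSetType(ts, TSPSEUDO);          /* pseudo-transient continuation toward steady state */
TSSetFromOptions(ts);
TSSolve(ts, u);
</pre>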
> ><br>
> > Can you give me any additional lingo or references for the fracture problem?<br>
> ><br>
> > Thanks, Zak<br>
> ><br>
> > On Wed, Oct 11, 2017 at 8:53 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
> > On Wed, Oct 11, 2017 at 11:33 AM, zakaryah . <<a href="mailto:zakaryah@gmail.com" target="_blank">zakaryah@gmail.com</a>> wrote:<br>
> > Many thanks for the suggestions, Matt.<br>
> ><br>
> > I tried putting the solvers in a loop, like this:<br>
> ><br>
> > do {<br>
> > NewtonLS<br>
> > check convergence<br>
> > if (converged) break<br>
> > NRichardson or NGMRES<br>
> > } while (!converged)<br>
> ><br>
> > The results were interesting, to me at least. With NRichardson, there was indeed improvement in the residual norm, followed by improvement with NewtonLS, and so on for a few iterations of this loop. In each case, after a few iterations the NewtonLS appeared to be stuck in the same way as after the first iteration. Eventually neither method was able to reduce the residual norm, which was still significant, so this was not a total success. With NGMRES, the initial behavior was similar, but eventually the NGMRES progress became erratic. The minimal residual norm was a bit better using NGMRES than NRichardson, but neither combination of methods fully converged. For both NRichardson and NGMRES, I simply used the defaults, as I have no knowledge of how to tune the options for my problem.<br>
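(For reference, the same alternation can be expressed through PETSc's nonlinear preconditioning instead of a hand-written loop; a short sketch, assuming snes already exists:)<br>
<pre>
/* NGMRES outer iteration with NewtonLS as its nonlinear preconditioner,
   i.e. the option form: -snes_type ngmres -npc_snes_type newtonls        */
SNES npc;

SNESSetType(snes, SNESNGMRES);
SNESGetNPC(snes, &npc);
SNESSetType(npc, SNESNEWTONLS);
SNESSetFromOptions(snes);
</pre>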
> ><br>
> > Are you certain that the equations have a solution? I become a little concerned when Richardson stops converging. It's<br>
> > still possible you have really hard-to-solve equations, it just becomes less likely. And even if they truly are hard to solve,<br>
> > then there should be physical reasons for this. For example, it could be that discretizing the minimizing PDE is just the<br>
> > wrong thing to do. I believe this is the case in fracture, where you attack the minimization problem directly.<br>
> ><br>
> > Matt<br>
> ><br>
> > On Tue, Oct 10, 2017 at 4:08 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
> > On Tue, Oct 10, 2017 at 12:08 PM, zakaryah . <<a href="mailto:zakaryah@gmail.com" target="_blank">zakaryah@gmail.com</a>> wrote:<br>
> > Thanks for clearing that up.<br>
> ><br>
> > I'd appreciate any further help. Here's a summary:<br>
> ><br>
> > My ultimate goal is to find a vector field which minimizes an action. The action is a (nonlinear) function of the field and its first spatial derivatives.<br>
> ><br>
> > My current approach is to derive the (continuous) Euler-Lagrange equations, which results in a nonlinear PDE that the minimizing field must satisfy. These Euler-Lagrange equations are then discretized, and I'm trying to use an SNES to solve them.<br>
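(For a generic action S[u] = ∫ L(u, ∇u) dV, those Euler-Lagrange equations read ∂L/∂u - ∇·(∂L/∂(∇u)) = 0; it is this nonlinear PDE, once discretized, that becomes the SNES residual.)<br>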
> ><br>
> > The problem is that the solver seems to reach a point at which the Jacobian (this corresponds to the second variation of the action, which is like a Hessian of the energy) becomes nearly singular, but where the residual (RHS of PDE) is not close to zero. The residual does not decrease over additional SNES iterations, and the line search results in tiny step sizes. My interpretation is that this point of stagnation is a critical point.<br>
> ><br>
> > The normal thing to do here (I think) is to engage solvers which do not depend on that particular point. So using<br>
> > NRichardson, or maybe NGMRES, to get past that. I would be interested to see if this is successful.<br>
> ><br>
> > Matt<br>
> ><br>
> > I have checked the hand-coded Jacobian very carefully and I am confident that it is correct.<br>
> ><br>
> > I am guessing that such a situation is well-known in the field, but I don't know the lingo or literature. If anyone has suggestions I'd be thrilled. Are there documentation/methodologies within PETSc for this type of situation?<br>
> ><br>
> > Is there any advantage to discretizing the action itself and using the optimization routines? With minor modifications I'll have the gradient and Hessian calculations coded. Are the optimization routines likely to stagnate in the same way as the nonlinear solver, or can they take advantage of the structure of the problem to overcome this?<br>
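(For reference, the optimization route would go through TAO; a rough sketch with PETSc 3.7-era routine names, where FormObjGrad, FormHessian, H, u and user are illustrative stand-ins for the discretized action, its Hessian, and the existing data:)<br>
<pre>
Tao tao;

TaoCreate(PETSC_COMM_WORLD, &tao);
TaoSetType(tao, TAONTR);                                   /* Newton trust-region */
TaoSetInitialVector(tao, u);
TaoSetObjectiveAndGradientRoutine(tao, FormObjGrad, &user);
TaoSetHessianRoutine(tao, H, H, FormHessian, &user);
TaoSetFromOptions(tao);
TaoSolve(tao);
</pre>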
> ><br>
> > Thanks a lot in advance for any help.<br>
> ><br>
> > On Sun, Oct 8, 2017 at 5:57 AM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
> ><br>
> > There is apparently some confusion in understanding the ordering. Is this all on one process where you get funny results? Are you using MatSetValuesStencil() to provide the matrix entries (it is generally easier than computing the indices yourself)? In parallel, MatView() always maps the rows and columns to the natural ordering before printing, if you use a matrix created from the DMDA. If you create the matrix yourself it has a different MatView in parallel, which prints in the PETSc ordering.<br>
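(A small sketch of the MatSetValuesStencil() route for a 3-dof DMDA, with illustrative indices and value, so PETSc does the global-index bookkeeping:)<br>
<pre>
MatStencil  row, col;
PetscScalar v = 1.0;

row.i = i; row.j = j; row.k = k; row.c = c;            /* equation: dof c at grid point (i,j,k)  */
col.i = i; col.j = j; col.k = k + 1; col.c = c2;       /* unknown:  dof c2 at neighbor (i,j,k+1) */
MatSetValuesStencil(J, 1, &row, 1, &col, &v, INSERT_VALUES);

/* after all entries have been set: */
MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);
</pre>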
> ><br>
> ><br>
> > Barry<br>
> ><br>
> ><br>
> ><br>
> >> On Oct 8, 2017, at 8:05 AM, zakaryah . <<a href="mailto:zakaryah@gmail.com" target="_blank">zakaryah@gmail.com</a>> wrote:<br>
> >><br>
> >> I'm more confused than ever. I don't understand the output of -snes_type test -snes_test_display.<br>
> >><br>
> >> For the user-defined state of the vector (where I'd like to test the Jacobian), the finite difference Jacobian at row 0 evaluates as:<br>
> >><br>
> >> row 0: (0, 10768.6) (1, 2715.33) (2, -1422.41) (3, -6121.71) (4, 287.797) (5, 744.695) (9, -1454.66) (10, 6.08793) (11, 148.172) (12, 13.1089) (13, -36.5783) (14, -9.99399) (27, -3423.49) (28, -2175.34) (29, 548.662) (30, 145.753) (31, 17.6603) (32, -15.1079) (36, 76.8575) (37, 16.325) (38, 4.83918)<br>
> >><br>
> >> But the hand-coded Jacobian at row 0 evaluates as:<br>
> >><br>
> >> row 0: (0, 10768.6) (1, 2715.33) (2, -1422.41) (3, -6121.71) (4, 287.797) (5, 744.695) (33, -1454.66) (34, 6.08792) (35, 148.172) (36, 13.1089) (37, -36.5783) (38, -9.99399) (231, -3423.49) (232, -2175.34) (233, 548.662) (234, 145.753) (235, 17.6603) (236, -15.1079) (264, 76.8575) (265, 16.325) (266, 4.83917) (267, 0.) (268, 0.) (269, 0.)<br>
> >> and the difference between the Jacobians at row 0 evaluates as:<br>
> >><br>
> >> row 0: (0, 0.000189908) (1, 7.17315e-05) (2, 9.31778e-05) (3, 0.000514947) (4, 0.000178659) (5, 0.000178217) (9, -2.25457e-05) (10, -6.34278e-06) (11, -5.93241e-07) (12, 9.48544e-06) (13, 4.79709e-06) (14, 2.40016e-06) (27, -0.000335696) (28, -0.000106734) (29, -0.000106653) (30, 2.73119e-06) (31, -7.93382e-07) (32, 1.24048e-07) (36, -4.0302e-06) (37, 3.67494e-06) (38, -2.70115e-06) (39, 0.) (40, 0.) (41, 0.)<br>
> >><br>
> >> The difference between the column numbering between the finite difference and the hand-coded Jacobians looks like a serious problem to me, but I'm probably missing something.<br>
> >><br>
> >> I am trying to use a 3D DMDA with 3 dof, a box stencil of width 1, and for this test problem the grid dimensions are 11x7x6. For a grid point x,y,z, and dof c, is the index calculated as c + 3*x + 3*11*y + 3*11*7*z? If so, then the column numbers of the hand-coded Jacobian match those of the 27 point stencil I have in mind. However, I am then at a loss to explain the column numbers in the finite difference Jacobian.<br>
> >><br>
> >><br>
> >><br>
> >><br>
> >> On Sat, Oct 7, 2017 at 1:49 PM, zakaryah . <<a href="mailto:zakaryah@gmail.com" target="_blank">zakaryah@gmail.com</a>> wrote:<br>
> >> OK - I ran with -snes_monitor -snes_converged_reason -snes_linesearch_monitor -ksp_monitor -ksp_monitor_true_residual -ksp_converged_reason -pc_type svd -pc_svd_monitor -snes_type newtonls -snes_compare_explicit<br>
> >><br>
> >> and here is the full error message, output immediately after<br>
> >><br>
> >> Finite difference Jacobian<br>
> >> Mat Object: 24 MPI processes<br>
> >> type: mpiaij<br>
> >><br>
> >> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------<br>
> >><br>
> >> [0]PETSC ERROR: Invalid argument<br>
> >><br>
> >> [0]PETSC ERROR: Matrix not generated from a DMDA<br>
> >><br>
> >> [0]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html</a> for trouble shooting.<br>
> >><br>
> >> [0]PETSC ERROR: Petsc Release Version 3.7.6, Apr, 24, 2017<br>
> >><br>
> >> [0]PETSC ERROR: ./CalculateOpticalFlow on a arch-linux2-c-opt named node046.hpc.rockefeller.internal by zfrentz Sat Oct 7 13:44:44 2017<br>
> >><br>
> >> [0]PETSC ERROR: Configure options --prefix=/ru-auth/local/home/zfrentz/PETSc-3.7.6 --download-fblaslapack -with-debugging=0<br>
> >><br>
> >> [0]PETSC ERROR: #1 MatView_MPI_DA() line 551 in /rugpfs/fs0/home/zfrentz/PETSc/build/petsc-3.7.6/src/dm/impls/da/fdda.c<br>
> >><br>
> >> [0]PETSC ERROR: #2 MatView() line 901 in /rugpfs/fs0/home/zfrentz/PETSc/build/petsc-3.7.6/src/mat/interface/matrix.c<br>
> >><br>
> >> [0]PETSC ERROR: #3 SNESComputeJacobian() line 2371 in /rugpfs/fs0/home/zfrentz/PETSc/build/petsc-3.7.6/src/snes/interface/snes.c<br>
> >><br>
> >> [0]PETSC ERROR: #4 SNESSolve_NEWTONLS() line 228 in /rugpfs/fs0/home/zfrentz/PETSc/build/petsc-3.7.6/src/snes/impls/ls/ls.c<br>
> >><br>
> >> [0]PETSC ERROR: #5 SNESSolve() line 4005 in /rugpfs/fs0/home/zfrentz/PETSc/build/petsc-3.7.6/src/snes/interface/snes.c<br>
> >><br>
> >> [0]PETSC ERROR: #6 solveWarp3D() line 659 in /ru-auth/local/home/zfrentz/Code/OpticalFlow/working/October6_2017/mshs.c<br>
> >><br>
> >><br>
> >> On Tue, Oct 3, 2017 at 5:37 PM, Jed Brown <<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>> wrote:<br>
> >> Always always always send the whole error message.<br>
> >><br>
> >> "zakaryah ." <<a href="mailto:zakaryah@gmail.com" target="_blank">zakaryah@gmail.com</a><mailto:<a href="mailto:zakaryah@gmail.com" target="_blank">zak<wbr>aryah@gmail.com</a>>> writes:<br>
> >><br>
> >> > I tried -snes_compare_explicit, and got the following error:<br>
> >> ><br>
> >> > [0]PETSC ERROR: Invalid argument<br>
> >> ><br>
> >> > [0]PETSC ERROR: Matrix not generated from a DMDA<br>
> >> ><br>
> >> > What am I doing wrong?<br>
> >> ><br>
> >> > On Tue, Oct 3, 2017 at 10:08 AM, Jed Brown <<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>> wrote:<br>
> >> ><br>
> >> >> Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> writes:<br>
> >> >><br>
> >> >> >> On Oct 3, 2017, at 5:54 AM, zakaryah . <<a href="mailto:zakaryah@gmail.com" target="_blank">zakaryah@gmail.com</a>> wrote:<br>
> >> >> >><br>
> >> >> >> I'm still working on this. I've made some progress, and it looks like<br>
> >> >> the issue is with the KSP, at least for now. The Jacobian may be<br>
> >> >> ill-conditioned. Is it possible to use -snes_test_display during an<br>
> >> >> intermediate step of the analysis? I would like to inspect the Jacobian<br>
> >> >> after several solves have already completed,<br>
> >> >> ><br>
> >> >> > No, our current code for testing Jacobians is poor quality and<br>
> >> >> poorly organized. Needs a major refactoring to do things properly. Sorry<br>
> >> >><br>
> >> >> You can use -snes_compare_explicit or -snes_compare_coloring to output<br>
> >> >> differences on each Newton step.<br>
> >> >><br>
> >><br>
> >><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> > -- Norbert Wiener<br>
> ><br>
> > <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">https://www.cse.buffalo.edu/~k<wbr>nepley/</a><<a href="http://www.caam.rice.edu/~mk51/" rel="noreferrer" target="_blank">http://www.caam.rice.e<wbr>du/~mk51/</a>><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> > -- Norbert Wiener<br>
> ><br>
> > <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">https://www.cse.buffalo.edu/~k<wbr>nepley/</a><<a href="http://www.caam.rice.edu/~mk51/" rel="noreferrer" target="_blank">http://www.caam.rice.e<wbr>du/~mk51/</a>><br>
<br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.caam.rice.edu/~mk51/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div>
</div></div>