[petsc-users] conditioning of snes with dmcomposite & grid sequencing

Barry Smith bsmith at mcs.anl.gov
Sat Sep 12 15:43:55 CDT 2015


> On Sep 12, 2015, at 1:57 PM, Matthew Knepley <knepley at gmail.com> wrote:
> 
> On Sat, Sep 12, 2015 at 1:54 PM, Gideon Simpson <gideon.simpson at gmail.com> wrote:
> I’m not sure how good a problem this would be as a toy model, but, once this is all done, I’d be happy to share the code.  
> 
> I’ve got a few comments on this:
> 
> 1.  The underlying problem is in the study of derivative nonlinear Schrödinger equations - see http://arxiv.org/abs/1301.1048 for details of the equation.  The problem has features of the Airy equation, which, as you’d guess, has exactly this problem of highly oscillatory structures that don’t get along with linear interpolation.
> 
> 2.  The problem is complex valued, with real and imaginary components at each site on the mesh.  At the origin, the boundary conditions are Dirichlet on the imaginary component and Neumann on the real component.  This led me to make the value of the real component at the origin, u0, one of the three parameters in the first component of the DMComposite structure.  Consequently, (u0, 0), the values of the function at the origin, should be involved in the interpolation strategy.


    I do not understand. You have the real part of the solution at the origin, u0, stored in the "redundant" part of the DMComposite? What about the value on the DMDA mesh? Wouldn't the real part also be there, with the same value? Then when you interpolate the solution on the DMDA you would still get the correct values "near" the origin, so there is no reason to "involve" the u0 stored in the redundant part in the interpolation?

  Barry

  
>  Is there a way, through ghost points, for the current DM interpolation strategy to make use of these under refinement?  Does it use ghost point values?
> 
> 3.  Is DMShellSetCreateInterpolation what I want to use here?  Where I would define my own interpolation strategy across dm’s?  
> 
> Ah, crap. This breaks the current DMComposite interpolation (in packm.c), which assumes that it is block diagonal. However, it produces
> a MatNest object, so we could conceivably just stick in another block, but that would not be general in any sense.
> 
>   Thanks,
> 
>      Matt
>  
> -gideon
> 
>> On Sep 12, 2015, at 2:36 PM, Matthew Knepley <knepley at gmail.com> wrote:
>> 
>> On Sat, Sep 12, 2015 at 1:26 PM, Gideon Simpson <gideon.simpson at gmail.com> wrote:
>> Got it.  I wanted to follow up on something I had been suspicious of for some time.  I took my initial solution, the one we want to apply grid sequencing to, loaded it into SciPy, did the grid refinement there manually, dumped the result to disk, and then ran it through PETSc on the refined mesh.  By “manual” here I just mean that I called the SciPy spline interpolant routines on uniform meshes with twice the resolution, not that I picked out the refinement points by hand.  When I use a linear interpolant and run my solver on it, it fails miserably, just as the first grid-sequence refinement does.  The numbers are a little different, but the residuals are still O(100).
>> 
>> If, instead, I use a second order interpolant, it solves in 11 iterations without a problem, and the result is physically consistent.  While these may be two sides of the same coin, it makes me think that the real difficulty in my problem is the interpolation strategy used under grid refinement.  Is there a way to set, manually I suppose, the interpolation that will be used during the grid sequence?
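A minimal SciPy sketch of the comparison described above (the mesh sizes and the Airy-like chirp are illustrative stand-ins, not the actual DNLS solution):

```python
import numpy as np
from scipy.interpolate import interp1d

# Coarse mesh and a "refined" mesh with twice the resolution, as in the
# experiment above. The test profile is an increasingly oscillatory chirp,
# standing in for the Airy-like structure of the real problem.
xc = np.linspace(0.0, 10.0, 201)   # coarse mesh
xf = np.linspace(0.0, 10.0, 401)   # refined mesh (twice the resolution)
u = np.cos(xc**2 / 2.0)            # oscillatory coarse-grid data

u_lin = interp1d(xc, u, kind='linear')(xf)
u_cub = interp1d(xc, u, kind='cubic')(xf)

exact = np.cos(xf**2 / 2.0)
err_lin = np.max(np.abs(u_lin - exact))
err_cub = np.max(np.abs(u_cub - exact))
print(err_lin, err_cub)            # the cubic error is far smaller here
```

On oscillatory data the linear interpolant's error is dominated by the h^2 u'' term, which is exactly the spurious high-frequency content that then pollutes the refined initial guess.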
>> 
>> Excellent! And also somewhat bizarre. This would be great to analyze. Is there any bribe we could give you to
>> convert this to a real PETSc example? It would be a nice way to show that dumping in high frequency energy
>> when interpolating can be bad for coupled problems.
>> 
>> Actual Solution: I think this will involve some coding. The quick and dirty way is to just stick the code you want
>> into dainterp.c:DMCreateInterpolation_DA(). If that works, we add it to interptype and enable it from the command
>> line. It would be nice to have a spline interpolant.
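For what it's worth, since spline interpolation is linear in the nodal values, it can be assembled as an explicit prolongation matrix, which is the kind of object DMCreateInterpolation_DA() has to produce; a small NumPy/SciPy sketch (meshes and data purely illustrative):

```python
import numpy as np
from scipy.interpolate import interp1d

# Coarse and fine 1D meshes, with the fine mesh a refinement by 2 of the
# coarse one, as in DMDA refinement.
xc = np.linspace(0.0, 1.0, 9)
xf = np.linspace(0.0, 1.0, 17)

# Column j of the prolongation matrix P is the cubic-spline interpolant of
# the j-th coarse basis vector evaluated on the fine mesh.
P = np.column_stack([
    interp1d(xc, np.eye(len(xc))[j], kind='cubic')(xf)
    for j in range(len(xc))
])

# By linearity, applying P to coarse data reproduces the spline interpolant.
u = np.sin(2.0 * np.pi * xc)
assert np.allclose(P @ u, interp1d(xc, u, kind='cubic')(xf))
```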
>> 
>>   Thanks,
>> 
>>      Matt
>>  
>> -gideon
>> 
>>> On Sep 12, 2015, at 7:14 AM, Matthew Knepley <knepley at gmail.com> wrote:
>>> 
>>> On Fri, Sep 11, 2015 at 4:30 PM, Gideon Simpson <gideon.simpson at gmail.com> wrote:
>>> Are there any built-in routines for freezing variables in SNES, or will that need to be handled by hand?
>>> 
>>> We used to have this for KSP, but it looks like someone removed it. The only thing left is KSPMonitorRange().
>>> What we did is find the few largest residual elements, take a small halo around them, project the problem to this
>>> small space using MatGetSubMatrix() and VecScatter, solve, and VecScatter back.
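A dense NumPy sketch of that restrict/solve/scatter idea (the tridiagonal matrix, the number of entries kept, and the one-point halo are all illustrative choices; in PETSc the restriction and update would go through MatGetSubMatrix() and VecScatter):

```python
import numpy as np

# A 1D Laplacian-stencil matrix and a manufactured right-hand side.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x_true = np.linspace(0.0, 1.0, n)
b = A @ x_true

# Current iterate with a localized error, giving a localized residual.
x = x_true.copy()
x[30:33] += 1.0
r = b - A @ x

# Take the few largest residual entries plus a one-point halo around them.
worst = np.argsort(np.abs(r))[-5:]
halo = np.unique(np.clip(np.concatenate([worst - 1, worst, worst + 1]), 0, n - 1))

# Solve the problem restricted to that index set and scatter the update back.
dx = np.linalg.solve(A[np.ix_(halo, halo)], r[halo])
x[halo] += dx

print(np.linalg.norm(b - A @ x))   # residual is eliminated for this local error
```

Because the error here is supported entirely inside the halo, the restricted solve recovers it exactly; in general the trick just damps the localized part of the residual.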
>>> 
>>> We do not have this for nonlinear stuff (like many other things) because there is no explicit matrix to manipulate,
>>> and language support for computing only parts of the nonlinear function is really weak. What we really want
>>> is something like what Victor Eijkhout was proposing a few years ago, namely automatic discovery of index sets
>>> for communication, in this case with main memory. We would need the residual code, given an output set, to tell
>>> us what input set is needed. Then we make a Scatter, select part of the DM, and we could compute. Now it has
>>> to be done by hand.
>>> 
>>>   Thanks,
>>> 
>>>      Matt
>>>  
>>> Also, I remain curious about the starting guess that the grid sequence uses during each refinement.  Is there a way to dump those to disk for inspection?
>>> 
>>> -gideon
>>> 
>>>> On Sep 11, 2015, at 4:05 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>>> 
>>>> On Fri, Sep 11, 2015 at 1:05 PM, Gideon Simpson <gideon.simpson at gmail.com> wrote:
>>>> Since the problem has not only the two components in the DM, but the second component has 4 degrees of freedom per mesh point, I thought it best to do the post processing separately.  See attached
>>>> 
>>>> So the whole thing is being controlled by 1 variable.
>>>> 
>>>> We should try freezing everything else, and just solving that scalar equation I guess.
>>>> 
>>>>    Matt
>>>>  
>>>> -gideon
>>>> <Screen Shot 2015-09-11 at 2.04.25 PM.png>
>>>> 
>>>>> On Sep 11, 2015, at 10:16 AM, Matthew Knepley <knepley at gmail.com> wrote:
>>>>> 
>>>>> On Fri, Sep 11, 2015 at 9:08 AM, Gideon Simpson <gideon.simpson at gmail.com> wrote:
>>>>> Following up on the previous thread, for my dm composite problem, I find that at the end of the first grid sequence, where it fails to converge, the distribution of the norms between the two pieces is:
>>>>> 
>>>>>  39 SNES Function norm 2.253098577796e+02 
>>>>>  40 SNES Function norm 2.253098577331e+02 
>>>>>  41 SNES Function norm 2.253098577228e+02 
>>>>>  42 SNES Function norm 2.253098577212e+02 
>>>>>  43 SNES Function norm 2.253098577174e+02 
>>>>>  44 SNES Function norm 2.253098577166e+02 
>>>>>  45 SNES Function norm 2.253098577158e+02 
>>>>>  46 SNES Function norm 2.253098577157e+02 
>>>>>  47 SNES Function norm 2.253098577156e+02 
>>>>>  48 SNES Function norm 2.253098577156e+02 
>>>>> Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 48
>>>>>  ||r|| = 225.31, 7999 entries
>>>>>  ||rp|| = 140.021, 3 entries
>>>>>  ||rQ|| = 176.518, 7996 entries
>>>>> 
>>>>> Since I think we were convinced that this was intrinsic to the problem, and not an artifact of the Jacobian, I am using my own Jacobian.
>>>>> 
>>>>> Okay, I see no pattern in the fields. Lets plot these 2 vectors, -vec_view draw, and screenshot.
>>>>> 
>>>>>   Matt
>>>>>  
>>>>> -gideon
>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>>> -- Norbert Wiener
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> 
> 
> 
> 
> 


