[petsc-dev] coding style

Oxberry, Geoffrey Malcolm oxberry1 at llnl.gov
Thu Aug 18 17:37:34 CDT 2016


> On Aug 18, 2016, at 3:17 PM, Munson, Todd <tmunson at mcs.anl.gov> wrote:
> 
> 
> For now, I am not proposing interface changes, but rather answering the
> question of what types of problems we need to support.  We can 
> discuss actual interfaces later.
> 
> Note: you really only need one multiplier for each of the 
> constraints (maybe interior-point methods are different).
> The sign changes depending on the bound that is active.

I don’t believe interior-point methods work this way; they typically maintain estimates of the dual multipliers, and inequality constraints aren’t necessarily active until convergence. When these methods converge, complementary slackness holds only approximately, and I believe a crossover algorithm must be implemented to determine the active constraints, at which point complementary slackness would be satisfied more exactly.
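
To make this concrete, here is a rough sketch (my notation, purely illustrative) of the perturbed complementarity conditions an interior-point method drives toward zero for a two-sided bound l <= x <= u, with separate multiplier estimates for each bound:

\[
  (x_i - l_i)\, z^{l}_i = \mu, \qquad (u_i - x_i)\, z^{u}_i = \mu, \qquad z^{l}_i,\ z^{u}_i \ge 0,
\]

together with stationarity \nabla f(x) - z^{l} + z^{u} = 0. Only as \mu \to 0 (or after a crossover step) does one of z^{l}_i, z^{u}_i vanish in each component, and only then do you recover the single signed multiplier \lambda_i = z^{l}_i - z^{u}_i that Todd describes.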

>  
> 
>>> I am not a huge fan of separating into equality constraints and range constraints, 
>>> but we can keep it.  Having only equality constraints does make the problem much 
>>> easier to solve; no need to identify an active set.
>>> 
>>> The variable separation is needed for PDE constrained optimization.  We may want 
>>> to separate constraints into state constraints, design constraints, and 
>>> joint state/design constraints.  For now, I would only consider design
>>> constraints.
>> 
>> This separation applies to any basic/nonbasic partition of decision variables, correct? It is needed for reduced-space methods; it is not needed for full-space methods. You could shim the full-space method with a reduced-space interface as well; I believe ROL already has this sort of interface. I think there is a lot of interest in reduced-space methods if they are robust and admit a wider range of formulations. LCL right now is too limited for the types of problems we want to solve (PDE-constrained plus additional design constraints that are not box constraints), and so we have to add ad hoc penalization methods to enforce these additional design constraints. This case was the motivation for adding SQPTR as a full-space method, and more methods need to be added. I’m also concerned based on the feedback I got at the IMA workshop on Frontiers in PDE-Constrained Optimization that LCL might not be performant enough of an algorithm to publish case studies for our applications. I can give more detailed feedback off-list on this point.
> 
> It's just a partitioning of the variables; solves can be reduced space or
> full space.

I agree, and I also agree with Matt's observation that imposing this restriction at compile time isn’t necessary: use FieldSplit instead.
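
As a purely illustrative sketch of what I mean (the split names and index sets below are placeholders, not existing code), the state/design blocks can be registered with PCFIELDSPLIT at run time instead of being baked into a compile-time interface:

    #include <petscksp.h>

    /* Sketch only: register a state/design variable partition with
       PCFIELDSPLIT, given index sets describing the two blocks. */
    static PetscErrorCode SetUpStateDesignSplit(KSP ksp, IS isState, IS isDesign)
    {
      PC             pc;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
      ierr = PCFieldSplitSetIS(pc, "state",  isState);CHKERRQ(ierr);
      ierr = PCFieldSplitSetIS(pc, "design", isDesign);CHKERRQ(ierr);
      /* The split type and the per-block solvers are then chosen from
         the options database rather than hard-coded here. */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

The solver itself never needs a separate compile-time interface for the design block; it only needs the index sets.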

> 
> I would argue that even for full space methods you will want to separate
> the variables by blocks, if for nothing else than to be able to
> construct preconditioners when necessary.

I agree; see my previous comment. This sort of block structure could also be used for some separable problems, potentially obviating the need for separate compile-time interfaces. It would be better to shift as much of this kind of specialization to run-time options as possible.
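
For example (option names below assume the named splits from the sketch above; this is just an illustration), the per-block specialization could then live entirely in the options database:

    -pc_type fieldsplit -pc_fieldsplit_type multiplicative
    -fieldsplit_state_ksp_type gmres -fieldsplit_state_pc_type gamg
    -fieldsplit_design_ksp_type preonly -fieldsplit_design_pc_type lu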

> 
> If you buy into optimize-then-discretize, then my understanding is that
> you need to use a different norm when measuring the gradient
> of the Lagrangian with respect to the state variables.

I already do this in SQPTR, although only for the Hilbert space case. Assuming SQPTR eventually gets merged, its interfaces would need to be revised for norm-reflexive convex Banach spaces so that all inner products become duality pairings. I am not sure whether ROL does this, but the Heinkenschloss and Ridzal paper cited in the SQPTR comments suggests this generalization can be made.
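
For anyone following along, the "different norm" in the discretized Hilbert-space case amounts to roughly the following (my sketch, not SQPTR's actual interface), where M_u is the Gram (mass-type) matrix representing the inner product on the state space U and g_u is the Euclidean vector of partial derivatives of the Lagrangian with respect to the state unknowns:

\[
  \nabla_u L = M_u^{-1} g_u, \qquad \|\nabla_u L\|_{U} = \sqrt{\,g_u^{T} M_u^{-1} g_u\,},
\]

so every time a derivative is turned into a gradient, the inverse Riesz map M_u^{-1} gets applied; in the Banach-space generalization that slot is filled by a duality pairing rather than an inner product.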

> 
> Todd.
> 


