[petsc-dev] exascale applications and PETSc usefulness

Barry Smith bsmith at mcs.anl.gov
Thu Sep 8 14:22:38 CDT 2016


     Thanks for the information. If there are specific things we can add to TAO that would help you out, please let us know. We don't really have a TAO development team that can just implement new solvers, but Todd is now spending some time on TAO, and we always want to add new functionality that people will actually use.


> On Sep 8, 2016, at 1:36 PM, Oxberry, Geoffrey Malcolm <oxberry1 at llnl.gov> wrote:
> Todd,
> I did not see the final proposal. Most of the approaches I saw in the
> planning stages on the LLNL side involved internal codes (e.g., ALE3D,
> DIABLO); depending on the code, finite element or finite volume
> discretizations are used. Finite element approaches were more commonly
> suggested. As far as I can tell, PETSc is not used as a library inside
> these codes, although PETSc is used on-site in at least one other
> application. 
> I don't develop these codes, so I cannot speak to why specifically PETSc
> is not used, and I only use one or two of them, so I can’t speak much to
> specifics regarding time stepper/nonlinear solver/preconditioner/linear
> solver combinations.
> What I can say fairly confidently is that because few scalable
> optimization solver frameworks exist, and because PETSc does see some
> internal use, it is easier to convince my colleagues to use PETSc for
> optimization than other alternatives. I have less expertise in PDE solving
> than my colleagues, and thus less influence in that area than in
> optimization.
> LCL is currently used in one LLNL application that I know of, but it is
> cumbersome because LCL cannot directly model general nonlinear constraints
> containing only design variables. As I understand it, my colleagues
> manually penalize the constraints that cannot be modeled directly using
> LCL, and then use LCL to solve a sequence of penalized problems. It is not
> clear to me how they determine good values of the penalty parameters. If an
> algorithm computes a sequence of these penalty parameters such that
> solving the corresponding LCL subproblems yields a sequence of feasible
> solutions converging to a KKT point, then that algorithm is a (possibly
> application-specific) nonlinear programming solver. Implementing
> production-ready nonlinear programming solvers capable of modeling these
> constraints would help us do more science per unit time by reducing the
> developer effort needed to produce the science. It’s probable that these
> algorithms also have better convergence properties, which would help us
> produce higher quality science as well.
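
[Editor's note: the manual penalization scheme described above can be sketched as a quadratic-penalty loop: solve a sequence of unconstrained subproblems with an increasing penalty parameter, warm-starting each solve from the previous solution. The toy problem, the plain gradient-descent inner solver (a stand-in for an LCL subproblem solve), and the penalty schedule are all illustrative assumptions, not the LLNL application's actual formulation.]

```python
import numpy as np

def f_grad(x):
    """Gradient of the toy objective f(x) = x0^2 + x1^2."""
    return 2.0 * x

def c(x):
    """Toy equality constraint c(x) = x0 + x1 - 1 = 0."""
    return x[0] + x[1] - 1.0

def solve_penalized(x, mu, iters=200):
    """Minimize f(x) + (mu/2) * c(x)^2 by gradient descent; this is a
    stand-in for one LCL subproblem solve in the scheme described above."""
    step = 1.0 / (2.0 + 2.0 * mu)   # 1/L for this quadratic penalty function
    for _ in range(iters):
        g = f_grad(x) + mu * c(x) * np.array([1.0, 1.0])
        x = x - step * g
    return x

x = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0]:   # assumed geometric penalty schedule
    x = solve_penalized(x, mu)           # warm-start from the previous solve
# As mu grows, x approaches the KKT point [0.5, 0.5] and the
# constraint violation c(x) shrinks toward zero.
print(x, c(x))
```

Choosing how fast to grow mu is exactly the difficulty Geoff raises: too slow wastes subproblem solves, too fast makes the subproblems ill-conditioned, which is why a solver that automates the schedule with convergence guarantees would remove this burden from application developers.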
> Geoff
> On 9/8/16, 7:07 PM, "Munson, Todd" <tmunson at mcs.anl.gov> wrote:
>> Geoff,
>> How are they modeling and solving their PDEs?
>> Todd.
>>>> 	* Transforming Additive Manufacturing through Exascale Simulation
>>>> (TrAMEx), John Turner (ORNL) with LLNL, LANL, NIST
>>> PETSc did not come up in TrAMEx meetings I was involved in, and I do not
>>> believe it will be used. I will try to find out if plans have changed.
>>> It
>>> could be useful, and maybe the new stuff Todd is planning to add to TAO
>>> will help. In its current form, we simply can't model the nonlinear
>>> programs we would need to solve using the production-ready solvers in
>>> TAO
>>> without kludging something together. Such a kludge would not be a robust
>>> implementation. TAOIPM could work in principle, but my impression is
>>> that
>>> it still needs work. Some of the SQP stuff I was working on in PETSc
>>> started to address this gap also; my goal is to clean that up by the
>>> 18th.
