[Petsc-trilinos-discussion] Scope and requirements

Barry Smith bsmith at mcs.anl.gov
Wed Nov 20 22:15:48 CST 2013


   Hmm, the PETSc wrapper for ML is rather clunky and fragile (meaning it is difficult to change safely, that is, without breaking something else in the process). This could be for several reasons:

    1) ML has evolved and doesn’t have the clearest documentation
    2) there isn’t a great match between the abstractions/code organization in ML/Trilinos and PETSc
    3) the interface was done “by hand” as needed each time to get a bit more functionality across and hence is a bit ad hoc.

   I hate to think of having many of these fragile ad hoc interfaces hanging around. So how could this be done in a scalable, maintainable way? If we understand the fundamental design principles of the two packages well enough to determine commonalities (and possibly troublesome huge differences) we may be able to see what changes could be made to make it easier to mate plugins from the two sides. So here is a brief discussion of PETSc object life cycles for the Trilinos folks. If they can produce something similar for Trilinos, that would help us see the sticky points. 

     Most important PETSc objects have the following life cycles (I’ll use Mat as the example class; the same thing holds for other classes, like nonlinear solvers, preconditioners, ….)

     Mat mat = MatCreate(MPI_Comm)
     MatSetXXX()                                         // can set some generic properties of the matrix, like size
     MatSetType()    // instantiate the actual class like compressed sparse row, or matrix free or ….
     MatSetYYY()      // set generic properties of the matrix or ones specific to the actual class instantiated
     MatSetFromOptions()   // allow setting options from the command line etc for the matrix
     MatSetUp()        //  “set up” the matrix so that actual methods may be called on it 

         In some sense all of the steps above are part of the basic constructor of the object (note that at this point we still don’t have any entries in the matrix).
         Also at this point the “size” and parallel layout of the matrix (for solvers, the size of the vectors and matrices it uses) is set in stone and cannot be changed
         (without an XXXReset()).

      MatSetValues()
      MatAssemblyBegin/End()    // phases to put values into matrices with explicit storage of entries

         Once this is done one can perform operations with the matrix

      MatMult() 
      etc

      MatDestroy() or 
      MatReset()         // cleans out everything related to the size/parallel layout of the vectors/matrices but leaves the type and options that have
                                    been set; this allows one to use the same (solver) object again for a different-sized problem (due to grid refinement or
                                    whatever) without needing to recreate the object from scratch.
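      The life cycle above can be sketched as a complete program. This is only an illustrative sketch of the stages described, written against the current PETSc API (it uses the PetscCall() error-checking macro from recent releases; older code used ierr/CHKERRQ):

```c
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat      A;
  PetscInt i, n = 10;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* the "constructor" stages */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));                   /* Mat mat = MatCreate(comm) */
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));  /* generic property: size   */
  PetscCall(MatSetType(A, MATAIJ));                             /* instantiate the concrete class (compressed sparse row) */
  PetscCall(MatSetFromOptions(A));                              /* allow -mat_type etc. from the command line */
  PetscCall(MatSetUp(A));                                       /* finish the "constructor"; still no entries */

  /* put values into the matrix and assemble */
  for (i = 0; i < n; i++) PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  /* now operations such as MatMult() are legal */

  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}
```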

     There are a few other things like serialization but I don’t think they matter in this discussion. There is reference counting so you can pass objects into other objects etc and the objects will be kept around automatically until the reference counts go down to zero. If you have a class that provides these various stages then it is not terribly difficult to wrap them up to look like PETSc objects (what Jed called a plugin). In fact for ML we have a PCCreate_ML() PCSetUp_ML() PCDestroy_ML() etc. 
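      For the Trilinos folks, a skeleton of what such a plugin looks like on the PETSc side may help. The PC type name “my” and the function names below are hypothetical, but the ops-table pattern is the same one PCCreate_ML()/PCSetUp_ML()/PCDestroy_ML() follow; it assumes access to the private implementation header of recent PETSc releases:

```c
#include <petsc/private/pcimpl.h>  /* private header exposing pc->ops (path varies by PETSc version) */

/* hypothetical plugin: each stage of the PC life cycle gets a function */
static PetscErrorCode PCSetUp_My(PC pc)               { return 0; }            /* build the external solver here   */
static PetscErrorCode PCApply_My(PC pc, Vec x, Vec y) { return VecCopy(x, y); } /* identity apply as a placeholder */
static PetscErrorCode PCDestroy_My(PC pc)             { return 0; }            /* free external solver data here   */

/* the "constructor" fills in the virtual function table */
PetscErrorCode PCCreate_My(PC pc)
{
  pc->ops->setup   = PCSetUp_My;
  pc->ops->apply   = PCApply_My;
  pc->ops->destroy = PCDestroy_My;
  return 0;
}

/* registering with PCRegister("my", PCCreate_My) makes -pc_type my select it */
```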

      So one way to program with PETSc is to write code that manages the life cycles of whatever objects are needed. For example you create a linear solver object, a matrix object, some vector objects, set the right options, fill them up appropriately, call the solver and then cleanup. Same with nonlinear solvers, ODE integrators, eigensolvers. 

      For “straightforward” applications this model can often be fine. But when one wants to solve more complicated problems, or use algorithms that require more information about the problem, such as geometric multigrid and “block” solvers (here I mean block solvers as for Stokes equations and “multi-physics” problems, not block Jacobi or multiple-right-hand-side “block solvers”), requiring the user to manage the life cycles of all the vectors, matrices, and auxiliary vectors and matrices (creating them, giving them appropriate dimensions, hooking them together, filling them with appropriate values, and finally destroying them) is asking too much of the users.

      PETSc has the DM object, which can be thought of as a “factory” for the correctly sized sub-vectors and matrices needed for the particular problem being solved. The DM is given to the solver, and the solver then queries the DM to create whichever of those objects it needs in the solution process. For example, with geometric multigrid, PCMG asks the DM for each coarser-grid matrix. For a Stokes problem, PCFIELDSPLIT can ask for the (0,0) part of the operator, or the (1,1), etc., and build up the solver from the objects provided by the DM. Thus a typical PETSc program creates an appropriate solver object (linear, nonlinear, ODE, eigen), creates a DM for the problem, and passes the DM to the solver; during setup the solver obtains all the information it needs from the DM and solves the problem. Obviously the DM needs information about the mesh and PDE being solved. 
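      A sketch of that DM-as-factory workflow, using a structured-grid DMDA handed to a linear solver. This is an illustrative outline against the current API (DM_BOUNDARY_NONE and PetscCall() are spellings from recent releases), with the user assembly callbacks elided:

```c
#include <petscksp.h>
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM  da;
  KSP ksp;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* DM describing a 1-D structured grid of 64 points, 1 dof per point, stencil width 1 */
  PetscCall(DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, 64, 1, 1, NULL, &da));
  PetscCall(DMSetFromOptions(da));
  PetscCall(DMSetUp(da));

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetDM(ksp, da));   /* hand the "factory" to the solver */
  /* the user would register callbacks (not shown) that fill the DM-created objects:
       KSPSetComputeOperators(ksp, ComputeMatrix, NULL);
       KSPSetComputeRHS(ksp, ComputeRHS, NULL);                                      */
  PetscCall(KSPSetFromOptions(ksp));
  /* KSPSolve(ksp, NULL, NULL) would then query the DM for whatever vectors and
     matrices it needs, including coarser-grid operators if -pc_type mg is selected */

  PetscCall(KSPDestroy(&ksp));
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}
```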

   I am not writing this to advocate making Trilinos follow the same model (though I think you should :-)) but instead to develop a common understanding of what is common in our approaches (and can be leveraged) and what is fundamentally different (and could be troublesome). For example, the fact that you have a different approach to creating Trilinos objects (using C++ factories) is superficially very different from what PETSc does, but that may not really matter.

   Barry



On Nov 20, 2013, at 5:16 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> Barry Smith <bsmith at mcs.anl.gov> writes:
>>   One “suggestion” is, could there be a “higher-level” interface that
>>   people could use that incorporated either PETSc or Trilinos
>>   underneath? A difficulty I see with that approach is that design
>>   decisions (either in C or C++) at one level of the stack permeate
>>   the entire stack and users have to see more than just the top level
>>   of the stack from our libraries. For example,
> 
> I think that if the program managers want better integration and an
> easier transition between packages, that instead of creating a new
> high-level interface, we should build out our cross-package plugin
> support.  For example, ML is a popular solver among PETSc users, but we
> could add support for more Trilinos preconditioners and solvers, and
> even support assembling directly into a [TE]petra matrix (PETSc matrices
> are more dynamic, but I still think this can be done reasonably).  We
> probably need to talk some about representing block systems, but I don't
> think the issues are insurmountable.
> 
> Then applications can freely choose whichever package they find more
> convenient (based on experience, types of solvers, implementation
> language, etc) with confidence that they will be able to access any
> unique features of the other package.  When coupling multiple
> applications using different solver packages, the coupler should be able
> to choose either package to define the outer solve, with the same code
> assembling either a monolithic matrix or a split/nested matrix with
> native preconditioners within blocks.
> 
> As I see it, there is a fair amount of duplicate effort by packages
> (such as libMesh, Deal.II, FEniCS) that ostensibly support both Trilinos
> and PETSc, but were not written by "solvers people" and are managing the
> imperfect correspondence themselves.  The unified interfaces that these
> projects have built are generally less capable than if they had
> committed entirely to either one of our interfaces.
> 
> 
> It will take some effort to implement this interoperability and it's
> hard to sneak in under "basic research" like a lot of other software
> maintenance tasks, but I think that providing it within our own plugin
> systems is less work and can be supported better than any alternative.


