[petsc-dev] Integrating PFLOTRAN, PETSC & SAMRAI

Boyce Griffith griffith at cims.nyu.edu
Tue Jun 7 13:35:11 CDT 2011



On 6/7/11 2:24 PM, Philip, Bobby wrote:
> Jed:
>
> SAMRAI has a fairly typical structured AMR design. There is a list or array of refinement levels that together form the AMR hierarchy, and
> each refinement level has a set of logically rectangular patches. Data is only contiguous on a patch. Patches on different refinement levels
> can overlap over the same physical domain. A SAMRAI vector in effect consists of a set of data pointers, one for each patch, ordered within
> a refinement level and then by level for the whole hierarchy. It is also possible to create a vector spanning a subset of levels. SAMRAI internally
> has several data types based on cell, vertex, edge, etc. data centering, and a SAMRAI vector can have several data components consisting of a
> mix of these data types as well as user-defined data types (something I make use of). The multilevel nature of a SAMRAI vector allows me to use multilevel
> solvers fairly naturally. The current interface to PETSc involves an adaptor that has within it a PETSc Vec and a SAMRAI vector, in much the
> way you describe. The overhead of constructing a native PETSc vector across all refinement levels would involve writing a global ordering
> scheme that numbers nodes/cells/edges etc. (depending on the data centering) and then doing maps back and forth. For my larger simulations
> the cost of this would be too high, because it would not be a one-time cost. I would have to do this each time PETSc KSP made a call to
> one of my multilevel preconditioners such as FAC.
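
Just to make sure I am picturing the adaptor correctly: I think of it as
roughly the shell-Vec sketch below.  The names are invented and this is
not the actual SAMRAI interface; it only illustrates stashing the SAMRAI
vector behind a PETSc shell Vec.

#include <stdlib.h>
#include <petscvec.h>

/* Hypothetical adaptor context: not the real SAMRAI class, just an
   illustration of keeping a PETSc Vec and a SAMRAI vector side by side. */
typedef struct {
  void *samrai_vector;  /* in practice a pointer to the SAMRAI vector object */
} SAMRAIVecAdaptor;

/* Wrap an (opaque here) SAMRAI vector in a PETSc shell Vec.  nlocal/nglobal
   would come from counting the unknowns the SAMRAI vector spans. */
PetscErrorCode WrapSAMRAIVector(MPI_Comm comm, void *samrai_vector,
                                PetscInt nlocal, PetscInt nglobal, Vec *v)
{
  PetscErrorCode    ierr;
  SAMRAIVecAdaptor *ctx = (SAMRAIVecAdaptor *) malloc(sizeof(*ctx));

  ctx->samrai_vector = samrai_vector;
  ierr = VecCreateShell(comm, nlocal, nglobal, ctx, v); CHKERRQ(ierr);
  /* A real adaptor would now register VecShellSetOperation() callbacks
     (dot, norm, axpy, ...) that loop over levels and patches, retrieving
     ctx with VecShellGetContext(). */
  return 0;
}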

Would you really have to recompute the indexing for each call to the 
FAC preconditioner?  Or would it suffice to compute it once for a given 
patch hierarchy configuration?
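
In other words, something along the lines of the sketch below, where the
expensive ordering is rebuilt only when the hierarchy is regridded.  The
helper names are made up; this is only meant to illustrate the caching.

#include <stdlib.h>
#include <petscsys.h>

/* Hypothetical helpers: a counter that the hierarchy bumps on every regrid,
   and the (expensive) construction of a SAMRAI-index -> PETSc-index map. */
extern int       hierarchy_generation(void *hierarchy);
extern PetscInt *build_global_ordering(void *hierarchy);

static int       cached_generation = -1;
static PetscInt *cached_ordering   = NULL;

/* Return the global ordering, rebuilding it only when the patch hierarchy
   configuration has changed since the last call. */
PetscErrorCode GetGlobalOrdering(void *hierarchy, const PetscInt **ordering)
{
  int gen = hierarchy_generation(hierarchy);
  if (gen != cached_generation) {
    free(cached_ordering);
    cached_ordering   = build_global_ordering(hierarchy);
    cached_generation = gen;
  }
  *ordering = cached_ordering;
  return 0;
}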

> However, in the shorter term we have a resource issue. There are not enough people to maintain the PETSc-SAMRAI interface, and PFLOTRAN depends
> on it. I believe the current PETSc-SAMRAI interface is useful; we just need to figure out how to maintain it with minimal overhead so that breakages
> are not time-consuming.

I think that Jed and Barry's suggestion of having the "getVector" 
method/macro do the correct cache invalidation ought to improve the 
reliability of the interface (see the sketch below).  Keeping up to date 
with API changes is harder to address; this is why I usually only sync 
up with the final released PETSc.
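
To be concrete about the caching I have in mind for getVector(): roughly
the sketch below.  The helper names are invented and this is not the
actual interface; the point is just the modification-count check.

#include <petscvec.h>

/* Hypothetical helpers: a modification counter maintained on the SAMRAI
   side, and a routine that copies SAMRAI patch data into a PETSc Vec. */
extern long           samrai_mod_count(void *samrai_vector);
extern PetscErrorCode copy_samrai_to_petsc(void *samrai_vector, Vec v);

typedef struct {
  void *samrai_vector;
  Vec   petsc_vec;      /* cached PETSc view of the SAMRAI data       */
  long  last_mod_count; /* SAMRAI modification count at the last copy */
} VecCache;

/* Hand out the cached Vec, refreshing it only if the SAMRAI vector has
   been modified since the last call. */
PetscErrorCode getVector(VecCache *c, Vec *v)
{
  long mods = samrai_mod_count(c->samrai_vector);
  if (mods != c->last_mod_count) {
    PetscErrorCode ierr = copy_samrai_to_petsc(c->samrai_vector, c->petsc_vec);
    CHKERRQ(ierr);
    c->last_mod_count = mods;
  }
  *v = c->petsc_vec;
  return 0;
}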

-- Boyce


