[petsc-dev] Integrating PFLOTRAN, PETSC & SAMRAI
Philip, Bobby
philipb at ornl.gov
Tue Jun 7 13:28:28 CDT 2011
Jed:
Realized I never addressed your question on matrices. I do use matrices on each refinement level, storing them as stencils at each grid point. However, I never form
a global matrix - we attempted that a while ago for a particular problem and realized it is a nightmare. Most of my problems involve elliptic components,
and I use the per-level matrices in that context while performing smooths. Again, this is what is used in PFLOTRAN.
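Roughly, the per-level stencil storage and smoothing I have in mind look like the sketch below. The 5-point layout and the names (PatchStencil, smooth_patch) are purely illustrative, not the actual SAMRAI/PFLOTRAN code.

/* Illustrative only: a 5-point stencil stored at every cell of one patch,
 * and a damped point smooth (Gauss-Seidel-like, since u is updated in
 * place) applied patch by patch on a single refinement level. */
typedef struct {
  double c, n, s, e, w;           /* center and neighbor coefficients */
} PatchStencil;

static void smooth_patch(int nx, int ny, const PatchStencil *A,
                         double *u, const double *f, double omega)
{
  for (int j = 1; j < ny - 1; j++) {
    for (int i = 1; i < nx - 1; i++) {
      int    k = j * nx + i;
      double r = f[k] - (A[k].c * u[k] + A[k].n * u[k + nx] + A[k].s * u[k - nx]
                         + A[k].e * u[k + 1] + A[k].w * u[k - 1]);
      u[k] += omega * r / A[k].c;
    }
  }
}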
Hope this helps.
Regards,
Bobby
________________________________________
From: petsc-dev-bounces at mcs.anl.gov [petsc-dev-bounces at mcs.anl.gov] On Behalf Of Philip, Bobby [philipb at ornl.gov]
Sent: Tuesday, June 07, 2011 2:24 PM
To: For users of the development version of PETSc
Subject: Re: [petsc-dev] Integrating PFLOTRAN, PETSC & SAMRAI
Jed:
SAMRAI has a fairly typical structured AMR design. There is a list or array of refinement levels that together form the AMR hierarchy, and
each refinement level has a set of logically rectangular patches. Data is only contiguous on a patch. Patches on different refinement levels
can overlap over the same physical domain. A SAMRAI vector in effect consists of a set of data pointers, one for each patch, ordered within
a refinement level and then by levels for the whole hierarchy. It is also possible to create a vector spanning a subset of levels. SAMRAI internally
has several data types based on cell, vertex, edge, etc. data centering, and a SAMRAI vector can have several data components consisting of a
mix of these data types as well as user-defined data types (something I make use of). The multilevel nature of a SAMRAI vector allows me to use multilevel
solvers fairly naturally. The current interface to PETSc involves an adaptor that holds both a PETSc Vec and a SAMRAI vector, in much the
way you describe. The overhead of constructing a native PETSc vector across all refinement levels would involve writing a global ordering
scheme that numbers nodes/cells/edges etc. (depending on the data centering) and then mapping back and forth. For my larger simulations
the cost of this would be too high, because it would not be a one-time cost: I would have to do it each time PETSc KSP made a call to
one of my multilevel preconditioners such as FAC.
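As a rough picture of the layout just described, the data structures amount to something like the following. These are hypothetical C structs for illustration only; SAMRAI itself is C++ and its actual classes (patch hierarchy, patch levels, patch data) are richer than this.

/* Data is contiguous only within a patch; patches are grouped into
 * refinement levels; the levels form the AMR hierarchy.  A "vector"
 * over levels l0..l1 is then just the collection of per-patch data
 * pointers for those levels, in patch-within-level, level-major order. */
typedef struct {
  int     ilo[2], ihi[2];   /* index box of this logically rectangular patch */
  double *data;             /* contiguous storage for one data component     */
} Patch;

typedef struct {
  int    npatches;
  Patch *patches;
} RefinementLevel;

typedef struct {
  int              nlevels;
  RefinementLevel *levels;
} Hierarchy;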
I like the approach Boyce has taken, where he has Vec data storage on each level and constructs a global vector based on that. I think it is a good
idea, and I would probably be interested in supporting such an interface in the future because, as Boyce pointed out, it could open the way to using
a lot of PETSc PCs as smoothers without writing my own. My only concern would be how to support the various data types.
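One way per-level Vec storage could be stitched into a single global PETSc vector is with VecNest, sketched below. This is only an illustration of the idea under that assumption, not necessarily how Boyce's code is organized; error checking is omitted and make_hierarchy_vec is a made-up name.

#include <petscvec.h>

/* Create one PETSc Vec per refinement level and expose them together as a
 * single VECNEST vector that Krylov methods can operate on directly. */
static PetscErrorCode make_hierarchy_vec(MPI_Comm comm, PetscInt nlevels,
                                         const PetscInt *ncells_per_level,
                                         Vec *global)
{
  Vec      *levels;
  PetscInt  l;

  PetscMalloc1(nlevels, &levels);
  for (l = 0; l < nlevels; l++) {
    VecCreateMPI(comm, PETSC_DECIDE, ncells_per_level[l], &levels[l]);
  }
  VecCreateNest(comm, nlevels, NULL, levels, global);
  /* VecNest keeps its own references, so the local handles can be released. */
  for (l = 0; l < nlevels; l++) VecDestroy(&levels[l]);
  PetscFree(levels);
  return 0;
}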
However, in the shorter term we have a resource issue. There are not enough people to maintain the PETSc-SAMRAI interface, and PFLOTRAN depends
on it. I believe the current PETSc-SAMRAI interface is useful; we just need to figure out how to maintain it with minimal overhead so that breakages
are not time consuming.
Bobby
________________________________________
From: petsc-dev-bounces at mcs.anl.gov [petsc-dev-bounces at mcs.anl.gov] On Behalf Of Jed Brown [jed at 59A2.org]
Sent: Tuesday, June 07, 2011 1:43 PM
To: For users of the development version of PETSc
Subject: Re: [petsc-dev] Integrating PFLOTRAN, PETSC & SAMRAI
On Tue, Jun 7, 2011 at 19:05, Boyce Griffith <griffith at cims.nyu.edu> wrote:
> Another might be easier access to CUDA.
What do SAMRAI matrices look like? In particular, how are they constructed?
Your mention of CUDA suggests a different approach. Instead of using the
non-contiguous storage directly, you could copy it to contiguous storage.
That is, you can make a Vec type (or adorn an existing Vec) with SAMRAI
information. Then you would have an interface with two functions:
VecSAMRAIGetVector(Vec,SAMRAIVector*);
VecSAMRAIRestoreVector(Vec,SAMRAIVector*);
The first of these would actually create the SAMRAI storage if it didn't exist
and copy the data into it from the Vec. If the SAMRAIVector is modified, a flag is
set in the Vec indicating that the copy "on the SAMRAI side" is the up-to-date one. If
the Vec is then accessed on the PETSc side via VecGetArray() or similar, the data
would be copied back. If it's modified on the PETSc side, the SAMRAI copy
would be marked as invalid.
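Concretely, the bookkeeping could look something like the sketch below. The struct and helper names are made up for illustration; only Vec, PetscBool, and VecGetArrayRead()/VecRestoreArrayRead() are real PETSc API, and the actual scatter into SAMRAI patch data is elided.

#include <petscvec.h>

typedef struct {
  void     *samrai_vec;    /* opaque handle to the SAMRAI-side storage        */
  PetscBool samrai_valid;  /* PETSC_TRUE if the SAMRAI copy is up to date     */
  PetscBool petsc_valid;   /* PETSC_TRUE if the Vec's own array is up to date */
} VecSAMRAILink;

/* Hypothetical helper for VecSAMRAIGetVector(): refresh the SAMRAI copy
 * from the Vec whenever the PETSc side wrote last.  (Creating the SAMRAI
 * storage on first use is omitted here.) */
static void link_sync_to_samrai(Vec v, VecSAMRAILink *link)
{
  if (!link->samrai_valid) {
    const PetscScalar *a;
    VecGetArrayRead(v, &a);
    /* ... scatter a[] into the per-patch SAMRAI arrays here ... */
    VecRestoreArrayRead(v, &a);
    link->samrai_valid = PETSC_TRUE;
  }
}

The mirror-image helper would run when the Vec is accessed on the PETSc side, copying the data back and clearing samrai_valid whenever the Vec is modified.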
Reasons for this instead of using a single copy in discontiguous storage:
1. All PETSc solver components would work.
2. Some operations will be faster using contiguous storage.
3. You can easily have a run-time option to switch.
The downside is possibly using more memory, but in most use cases, only a
couple Vecs are actually touched by the application. Most of the Vecs are in
the Krylov space or stored time steps. Those would be lighter and faster
using contiguous storage (PETSc native), and since VecSAMRAIGetVector()
would only be called on the ones the user interacts directly with, it
probably only costs a couple vectors to maintain the multiple copies (which
could mean less storage in total when summed over the whole Krylov space).
What do SAMRAI matrices look like? What is the assembly code like? Can we
preserve that API and get the data to go directly into a PETSc format?