particle tracking on a DA
Boyce Griffith
griffith at cims.nyu.edu
Wed Sep 26 13:34:11 CDT 2007
Hi, Rich --
I essentially do Lagrangian particle tracking in my AMR immersed
boundary code using PETSc, although if you are looking for a
quick-and-dirty solution, I'm pretty sure you don't want to do things
the way that I do. Also, I apologize in advance for the rambling response.
The basic idea is very simple:
==============================
Store the coordinates of the Lagrangian points in a ghosted Vec, and use
an AO object to keep track of the mapping from the current distribution
of Lagrangian points to some arbitrary initial ordering of points.
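In PETSc terms, the setup might look like the following minimal sketch
(assuming a recent PETSc; 2D points are stored as blocks of two
coordinates, and all variable names are illustrative):

    #include <petscvec.h>
    #include <petscao.h>

    /* Sketch only: nlocal points are owned here, nghost points lie in this
       process's ghost region; ghost_pts[] holds their global point indices
       in the current distribution, and app_idx[] holds the initial
       ("application") index of each locally owned point. */
    PetscErrorCode CreateParticleStorage(MPI_Comm comm, PetscInt nlocal,
                                         PetscInt nghost,
                                         const PetscInt ghost_pts[],
                                         const PetscInt app_idx[],
                                         Vec *X, AO *ao)
    {
      PetscErrorCode ierr;
      /* block size 2: one (x,y) pair per point; the ghost indices are
         block (i.e., point) indices */
      ierr = VecCreateGhostBlock(comm, 2, 2*nlocal, PETSC_DECIDE, nghost,
                                 ghost_pts, X);CHKERRQ(ierr);
      /* maps the arbitrary initial ordering to the current parallel
         ordering; NULL means the PETSc ordering is 0,1,2,... by owner */
      ierr = AOCreateBasic(comm, nlocal, app_idx, NULL, ao);CHKERRQ(ierr);
      return 0;
    }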
The devil's in the details:
===========================
During the computation, each point is assigned to the processor that
owns the Cartesian grid subdomain in which that point is presently
located. The ghosted Vec is constructed to provide the coordinates of
all Lagrangian points that lie within the ghost cell region of each
Cartesian grid subdomain. I think of each processor as "owning" only
those points that lie within the interior of the Cartesian grid
subdomain. In particular, processors should only modify data associated
with Lagrangian points that they "own". Updated values associated with
the ghost points must be provided via VecGhostUpdateBegin /
VecGhostUpdateEnd.
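For example (a sketch, with X the ghosted coordinate Vec from the setup
above):

    #include <petscvec.h>

    /* Sketch: after moving the owned points, refresh this process's ghost
       copies of the neighboring processes' point coordinates. */
    PetscErrorCode UpdateGhostCoordinates(Vec X)
    {
      Vec            Xlocal;
      PetscErrorCode ierr;
      ierr = VecGhostUpdateBegin(X, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
      ierr = VecGhostUpdateEnd(X, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
      /* the local form exposes the owned entries followed by the ghosts */
      ierr = VecGhostGetLocalForm(X, &Xlocal);CHKERRQ(ierr);
      /* ... read owned + ghost coordinates via VecGetArray() on Xlocal ... */
      ierr = VecGhostRestoreLocalForm(X, &Xlocal);CHKERRQ(ierr);
      return 0;
    }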
The drawback of this approach is that you have to re-distribute the
Lagrangian point data at regular intervals, as determined by the width
of your Lagrangian point ghost cell region and a CFL-type restriction.
E.g., if your CFL number is 2 and your ghost cell region is two cells
wide, you must redistribute points every timestep. If your CFL number
is 0.1 and your ghost cell region is one cell wide, you must
redistribute data at least once every 10 timesteps.
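That is, the redistribution interval is just the ghost region width
divided by the per-step CFL number, rounded down; a trivial helper
(names mine) makes the rule explicit:

    #include <cmath>

    /* steps you can safely take before an owned point could leave the
       Lagrangian ghost region: floor(2/2) = 1, floor(1/0.1) = 10 */
    int redistribution_interval(double ghost_width_cells, double cfl)
    {
      return (int)std::floor(ghost_width_cells / cfl);
    }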
The advantage to doing things this way is that the code required to
update the positions of the Lagrangian points is basically a
single-processor code. All you have to do is provide ghost cell values
for the Cartesian grid velocity field. This ghost cell width must be
sufficiently large to allow you to interpolate the velocities to the
positions of the Lagrangian points that are "owned" by the processor.
E.g., if you are using bilinear interpolation and node-centered
velocities, you probably can get away with a ghost cell width of 1 for
the velocity field, depending on the CFL number.
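As a concrete sketch of that interpolation (node-centered velocities
stored row-major with at least one ghost layer; all names are
illustrative):

    #include <cmath>

    /* Bilinear interpolation of node-centered velocities (u,v) to a
       particle at (px,py). (x0,y0) is the lowest-indexed node of the local
       ghosted patch, h the grid spacing, stride the row length of the
       local arrays. */
    void InterpVelocity(const double *u, const double *v, int stride,
                        double x0, double y0, double h,
                        double px, double py, double *up, double *vp)
    {
      const int    i = (int)std::floor((px - x0) / h);
      const int    j = (int)std::floor((py - y0) / h);
      const double s = (px - x0) / h - i;  /* fractional offsets in [0,1) */
      const double t = (py - y0) / h - j;
      const int    k = j*stride + i;       /* lower-left node of the cell */
      *up = (1-s)*(1-t)*u[k] + s*(1-t)*u[k+1]
          + (1-s)*t*u[k+stride] + s*t*u[k+stride+1];
      *vp = (1-s)*(1-t)*v[k] + s*(1-t)*v[k+1]
          + (1-s)*t*v[k+stride] + s*t*v[k+stride+1];
    }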
When it is time to redistribute the Lagrangian data, it is easy to
construct a VecScatter that performs the redistribution using the
AO. The trick is that you have to know which points belong to which
processor before you can build the VecScatter that redistributes
everything, and I'm not sure if it is straightforward to do this using
only PETSc data structures. I do this by storing in each Cartesian grid
cell a list of the Lagrangian points that are located in that cell. It
should be fairly easy to manage this data efficiently using an
appropriate STL container in C++ (e.g., std::map). Note that these lists
must be available both for the interior grid cells as well as the grid
cells in the ghost cell region of the subdomain.
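Here is a sketch of both pieces (again assuming a recent PETSc, with all
names illustrative): a map from flattened cell index to the particles in
that cell, and the scatter built once each process knows which points,
in the initial ordering, it will own:

    #include <map>
    #include <vector>
    #include <petscvec.h>
    #include <petscao.h>
    #include <petscis.h>

    /* flattened cell index -> indices of the particles in that cell */
    typedef std::map<PetscInt, std::vector<PetscInt> > CellToParticles;

    /* new_owned: initial ("application") indices of the points this
       process will own after redistribution, gathered from the cell lists
       above; Xnew must have local size 2*new_owned.size() */
    PetscErrorCode RedistributeCoordinates(AO ao, Vec Xold,
                                           std::vector<PetscInt> new_owned,
                                           Vec Xnew)
    {
      PetscErrorCode ierr;
      IS             from;
      VecScatter     scatter;
      /* convert, in place, to global indices in Xold's current ordering */
      ierr = AOApplicationToPetsc(ao, (PetscInt)new_owned.size(),
                                  &new_owned[0]);CHKERRQ(ierr);
      /* block size 2 again: each index addresses one (x,y) pair */
      ierr = ISCreateBlock(PETSC_COMM_WORLD, 2, (PetscInt)new_owned.size(),
                           &new_owned[0], PETSC_COPY_VALUES, &from);CHKERRQ(ierr);
      /* NULL destination IS: fill Xnew contiguously in local order */
      ierr = VecScatterCreate(Xold, from, Xnew, NULL, &scatter);CHKERRQ(ierr);
      ierr = VecScatterBegin(scatter, Xold, Xnew, INSERT_VALUES,
                             SCATTER_FORWARD);CHKERRQ(ierr);
      ierr = VecScatterEnd(scatter, Xold, Xnew, INSERT_VALUES,
                           SCATTER_FORWARD);CHKERRQ(ierr);
      ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
      ierr = ISDestroy(&from);CHKERRQ(ierr);
      return 0;
    }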
Summarizing the algorithm:
==========================
During each timestep:
- On each processor, update the Lagrangian coordinates of all points
that are presently assigned to that processor. (Note that the physical
positions of these points may have drifted outside of the extents of the
subdomain, but that is OK as long as your Cartesian grid velocity ghost
cell width is sufficiently large.)
When it is time to redistribute points:
- Communicate the ghosted Lagrangian coordinate information,
- Using the Lagrangian coordinate information, update the lists of
Lagrangian point indices in each Cartesian grid cell in the interior of
each Cartesian subdomain,
- Communicate these lists to the ghost cell regions of all of the
Cartesian subdomains, and
- Construct a new AO, build the appropriate VecScatter, and
redistribute your Lagrangian coordinate data.
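Put together, the driver loop is just the following skeleton (every
function below is a hypothetical placeholder for the corresponding step
above):

    /* hypothetical placeholders for the steps listed above */
    void UpdateOwnedParticlePositions(); /* interpolate velocity, move points */
    void ExchangeGhostCoordinates();     /* VecGhostUpdateBegin/End           */
    void RebuildCellParticleLists();     /* interior cells of each subdomain  */
    void ExchangeCellLists();            /* fill the ghost-region cell lists  */
    void RedistributeParticles();        /* new AO + VecScatter               */

    void TimestepLoop(int nsteps, int redistribute_every)
    {
      for (int step = 1; step <= nsteps; ++step) {
        UpdateOwnedParticlePositions();  /* essentially serial work */
        if (step % redistribute_every == 0) {
          ExchangeGhostCoordinates();
          RebuildCellParticleLists();
          ExchangeCellLists();
          RedistributeParticles();
        }
      }
    }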
Good luck,
-- Boyce
Richard Katz wrote:
> Hi All,
>
> Is anyone aware of PETSc particle tracking code? I'm looking for
> something that will do Lagrangian particle tracking in parallel for 2D
> fluid flow on a regular Cartesian mesh.
>
> If this doesn't exist, I may try to create it---quick and dirty style.
> My thinking was to use master-slave parallelism: keep a complete
> directory of the particle locations on node 0 then send the entire
> directory to each node after each timestep to do an update. Each node
> updates any points that fall into its subdomain, flags those particle
> entries, then returns the entire directory to node 0. Node 0 merges the
> updates into the master directory.
>
> This isn't scalable but it is simpler to program. Does anyone have any
> suggestions about strategy, implementation, PETSc tools that would
> facilitate this, etc?
>
> Cheers
> Rich
>
> Richard Foa Katz NSF IRFP Postdoc
> http://www.damtp.cam.ac.uk/user/rfk22
>
>