[petsc-users] partition of DM Vec entries

Dave May dave.mayhem23 at gmail.com
Sat Oct 15 00:29:07 CDT 2016


On 15 October 2016 at 06:17, Dave May <dave.mayhem23 at gmail.com> wrote:

>
>
> On Saturday, 15 October 2016, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>>
>>   Unless the particles are more or less equally distributed over the
>> entire domain, any kind of "domain decomposition" approach is questionable
>> for managing the particles: processes whose domains contain most of the
>> particles will have a great deal of work for all of their particles, while
>> processes whose domains contain few particles will have little work. I
>> can see two approaches to alleviate this problem.
>>
>> 1) constantly adjust the sizes/locations of the domains to load balance
>> the particles per domain or
>>
>> 2)  parallelize the particles (some how) instead of just the geometry.
>>
>> Anyway, there is a preliminary DMSWARM class, provided by Dave May, in the
>> development version of PETSc to help with working with particles. You might
>> look at it; I don't know whether it would be useful for you or not. IMHO,
>> software library support for particle methods is still very primitive
>> compared to finite difference/element support; in other words, we still
>> have a lot to do.
>
>
> If you are using an SPH formulation with a constant smoothing length (such
> as for incompressible media), then DMSWARM will be extremely useful. It
> manages the assignment of fields on point clouds and manages the data
> exchanges required for particle advection, as well as the gather operations
> from neighbour cells required to evaluate the SPH basis functions.
>
> DMSWARM is in the master branch. We would be happy if you want to be a beta
> tester. The API is in its infancy, so having a user play with what's there
> would be the best way to refine the design as required.
>
> Take a look at the examples and let us know if you need help.
>


Specifically, look at these examples (in the order listed):

* src/dm/examples/tutorials/swarm_ex2.c
Demonstrates how to create a swarm, register fields within the swarm, and
represent these fields as PETSc Vec objects.

* src/dm/examples/tutorials/swarm_ex3.c
This demonstrates how you push particles from one sub-domain to another.

* src/dm/examples/tutorials/swarm_ex1.c
This demonstrates how to define a collection operation to gather particles
from neighbour cells (the cells being defined via a DMDA).

There isn't a single complete example using DMSWARM and DMDA together for
everything required by SPH, but all the plumbing is in place; a minimal
sketch of the basic API follows.
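
For reference, here is a minimal, untested sketch along the lines of
swarm_ex2.c: it creates a basic (non-PIC) DMSWARM, registers one field, and
views that field as a PETSc Vec. The field name "velocity" and the particle
counts (100 particles per rank, buffer of 10) are placeholders, not anything
prescribed by the API.

  #include <petsc.h>

  int main(int argc,char **argv)
  {
    DM             swarm;
    Vec            v;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;

    /* Create the particle DM and register a 3-component real field */
    ierr = DMCreate(PETSC_COMM_WORLD,&swarm);CHKERRQ(ierr);
    ierr = DMSetType(swarm,DMSWARM);CHKERRQ(ierr);
    ierr = DMSwarmInitializeFieldRegister(swarm);CHKERRQ(ierr);
    ierr = DMSwarmRegisterPetscDatatypeField(swarm,"velocity",3,PETSC_REAL);CHKERRQ(ierr);
    ierr = DMSwarmFinalizeFieldRegister(swarm);CHKERRQ(ierr);

    /* 100 particles per rank to start with, plus buffer space for 10 more */
    ierr = DMSwarmSetLocalSizes(swarm,100,10);CHKERRQ(ierr);

    /* Expose the registered field as a Vec, use it, then hand it back */
    ierr = DMSwarmCreateGlobalVectorFromField(swarm,"velocity",&v);CHKERRQ(ierr);
    ierr = VecSet(v,0.0);CHKERRQ(ierr);
    ierr = DMSwarmDestroyGlobalVectorFromField(swarm,"velocity",&v);CHKERRQ(ierr);

    ierr = DMDestroy(&swarm);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }

Pushing particles between sub-domains (the subject of swarm_ex3.c) then
essentially comes down to updating the particle coordinates and calling
DMSwarmMigrate(); see that example for the exact setup it requires.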

Thanks,
  Dave


>
> Thanks,
>   Dave
>
>
>>
>>
>>   Barry
>>
>>
>>
>>
>>
>> > On Oct 14, 2016, at 9:54 PM, Sang pham van <pvsang002 at gmail.com> wrote:
>> >
>> > Hi Barry,
>> >
>> > Thank you for your answer. I am writing a parallel code for
>> > smoothed-particle hydrodynamics; in this code I use a DMDA background mesh
>> > to manage the particles. Each DMDA cell manages a number of particles, and
>> > that number can change both in time and from cell to cell. In each time
>> > step I need to send the positions and velocities of the particles in border
>> > cells to the neighboring partitions. I think I cannot use a DMDA Vec to do
>> > this, because the number of particles is not the same in all ghost cells.
>> >
>> > I think I am able to write a routine to do this, but the code may be quite
>> > complicated and not so "formal". I would appreciate it very much if you
>> > could suggest a method to solve my problem.
>> >
>> > Many thanks.
>> >
>> >
>> >
>> >
>> > On Sat, Oct 15, 2016 at 9:40 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>> >
>> >   Thanks, the question is very clear now.
>> >
>> >   For DMDA you can use DMDAGetNeighborsRank() to get the list of the
>> > (up to) 9 neighbors of a processor. (Sadly this routine does not have a
>> > manual page, but the arguments are obvious.) For other DMs I don't think
>> > there is any simple way to get this information. For none of the DMs is
>> > there a way to get information about which process is providing a specific
>> > ghost cell.
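
For concreteness, a minimal, untested sketch of printing those neighbour
ranks; it assumes the routine is spelled DMDAGetNeighbors() (taking the DMDA
and returning a pointer to the rank array, of length 9 in 2D with the middle
entry being the calling rank), so check the exact name and arguments against
your PETSc version:

  #include <petsc.h>

  /* Print the (up to) 9 neighbour ranks of this process's 2D DMDA sub-domain */
  PetscErrorCode PrintDMDANeighbours(DM da)
  {
    const PetscMPIInt *nbr;
    PetscMPIInt        rank;
    PetscInt           i;
    MPI_Comm           comm;
    PetscErrorCode     ierr;

    ierr = PetscObjectGetComm((PetscObject)da,&comm);CHKERRQ(ierr);
    ierr = MPI_Comm_rank(comm,&rank);CHKERRQ(ierr);
    ierr = DMDAGetNeighbors(da,&nbr);CHKERRQ(ierr);
    for (i = 0; i < 9; i++) {
      ierr = PetscSynchronizedPrintf(comm,"[rank %d] neighbour %d : rank %d\n",
                                     (int)rank,(int)i,(int)nbr[i]);CHKERRQ(ierr);
    }
    ierr = PetscSynchronizedFlush(comm,PETSC_STDOUT);CHKERRQ(ierr);
    return 0;
  }

Entries for neighbours that do not exist (e.g. at a non-periodic boundary)
may be placeholders rather than valid ranks, so verify on a small run before
relying on them.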
>> >
>> >   It is the "hope" of PETSc (and, I would think, of most parallel computing
>> > models) that the details of exactly which process computes the neighbor
>> > values should not matter for your own computation. If you provide more
>> > details on how you wish to use this information, we may have suggestions
>> > on how to proceed.
>> >
>> >   Barry
>> >
>> >
>> >
>> > > On Oct 14, 2016, at 9:23 PM, Sang pham van <pvsang002 at gmail.com> wrote:
>> > >
>> > > Hi Barry,
>> > >
>> > > In the two-process case the problem is simple, since I know that all
>> > > ghost cells of partition 0 are updated from partition 1. However, with
>> > > many processes, how do I know from which partitions the ghost cells of
>> > > partition 0 are updated? In other words, how can I find the neighboring
>> > > partitions of partition 0, and can I get a list of the ghost cells managed
>> > > by a neighboring partition?
>> > > Please let me know if my question is still not clear.
>> > >
>> > > Many thanks.
>> > >
>> > >
>> > > On Sat, Oct 15, 2016 at 8:59 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>> > >
>> > > > On Oct 14, 2016, at 8:50 PM, Sang pham van <pvsang002 at gmail.com> wrote:
>> > > >
>> > > > Hi,
>> > > >
>> > > > I am using a DM Vec for an FV code. For some reasons I want to know,
>> > > > for all ghost cells of a specific partition, which partition owns them.
>> > > > Is there a way to do that?
>> > >
>> > >   Could you please explain in more detail what you want? I don't
>> > > understand. Perhaps give a specific example with 2 processes?
>> > >
>> > >  Barry
>> > >
>> > >
>> > >
>> > > >
>> > > > Many thanks.
>> > > >
>> > > > Best,
>> > > >
>> > >
>> > >
>> >
>> >
>>
>>