[petsc-users] need help with vector interpolation on nonuniform DMDA grids

Dave May dave.mayhem23 at gmail.com
Wed Nov 7 17:25:03 CST 2018


On Tue, 6 Nov 2018 at 16:01, Matthew Knepley <knepley at gmail.com> wrote:

> On Tue, Nov 6, 2018 at 9:45 AM Francesco Magaletti <
> francesco_magaletti at fastwebnet.it> wrote:
>
>> Hi Matt,
>> thanks for the reply, but I didn't exactly catch your point. What do you
>> mean by “first bin the points into cells, and then use the
>> interpolant from the discretization”? To me it sounds like my “naive”
>> approach, but maybe I missed something in your suggestion.
>>
>
> I did not read the whole mail before. Your naive approach is exactly
> right. However, in parallel, you need to keep track of the bounds of each
> process, send the points to the right process, and then do local location
> and interpolation. It is some bookkeeping.
>
> Explaining this gave me another idea. I think that DMSwarm might do this
> automatically. You create a DMSwarm, stick in
> all the points you need to interpolate at, set the cellDM to the DMDA used
> for interpolation. Then you call DMSwarmMigrate()
> to put the points on the correct process, and then interpolate. Tell me if
> this does not work. I am Cc'ing the author Dave May
> in case I have made an error.
>

It will work using DMSwarm, but not exactly as Matt describes (although I
might have missed some details in your naive approach).

Here is why I think Matt's exact description is not quite what you want. When
you set the dmda on the swarm object, the swarm uses it for point location
and selects a particular communication pattern for the data exchange. The
point location only indicates whether a point is contained in the local
sub-domain. If the answer is no, the point is scattered to all ranks which
are neighbours of the current sub-domain; the point location does not
identify which sub-domain actually contains each point. Hence I think this
communication pattern may not be sufficient for what you want to do (in
general).

The alternative approach, which will work, is to not attach the dmda to the
swarm. In this case, you are responsible for determining where each point
should go and for assigning the target rank to an internal swarm variable
(textual name "DMSwarm_rank"). In this mode you define the communication
pattern yourself, rather than inferring it from the dmda decomposition.

Here is what I would do:
* I'll assume you want to interpolate from dmda2 to dmda1, and that you have
fields Vec F2 and Vec F1 defined on dmda2 and dmda1 respectively.
* I'll assume that you ultimately want the interpolated field values to
live in F1. If that is not the case, the procedure described below will be
slightly different.
* Below is some pseudo code. I'll write swarm[i]->"xxx" to indicate the
registered entry named "xxx" which lives in the i-th slot of the dmswarm's
internal storage.

[1]
Create a swarm and register the following fields
"coor" (PetscReal)
"dmda_index" (PetscInt)

[2]
Get the sub-domain bounding boxes from dmda2 and broadcast them to all
ranks, so that every rank knows the bounds of all sub-domains associated
with dmda2.
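For a 1D dmda the local bounding box is just the first and last locally
owned coordinate, so a sketch of [2] could be (my own variable names,
assuming coordinates have already been set on dmda2):

  Vec               xcoor2;
  const PetscScalar *xa;
  PetscInt          n2;
  PetscReal         mybounds[2],*bounds;
  PetscMPIInt       size;

  MPI_Comm_size(PETSC_COMM_WORLD,&size);
  DMGetCoordinates(dmda2,&xcoor2);
  VecGetLocalSize(xcoor2,&n2);
  VecGetArrayRead(xcoor2,&xa);
  mybounds[0] = PetscRealPart(xa[0]);    /* leftmost locally owned node */
  mybounds[1] = PetscRealPart(xa[n2-1]); /* rightmost locally owned node */
  VecRestoreArrayRead(xcoor2,&xa);
  PetscMalloc1(2*size,&bounds);
  MPI_Allgather(mybounds,2,MPIU_REAL,bounds,2,MPIU_REAL,PETSC_COMM_WORLD);
  /* bounds[2*k], bounds[2*k+1] now hold the sub-domain bounds of rank k */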

[3]
Set the local size of the swarm to match the number of locally owned points
in dmda1. Do not include the ghost points.
Copy all the coordinates from dmda1 into the registered swarm field "coor".
Copy the global index of each point in dmda1 into the registered field
"dmda_index".

[4]
Get the internally registered dmswarm field named "DMSwarm_rank".
for all points in the swarm, i
  swarm[i]->"DMSwarm_rank" = comm.rank
  for all bounding boxes of dmda2, b[k]
    if swarm[i]->"coor" is contained in b[k]
      swarm[i]->"DMSwarm_rank" = k
      break
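In C, [4] might look like this (using the bounds array gathered in [2];
note that bounding box k belongs to rank k by construction, and I assume
here that the internal "DMSwarm_rank" entry can be addressed as a PetscInt
- check the registered type in your PETSc version):

  PetscReal   *coor;
  PetscInt    *srank,npoints,p;
  PetscMPIInt rank,size,k;

  MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
  MPI_Comm_size(PETSC_COMM_WORLD,&size);
  DMSwarmGetLocalSize(swarm,&npoints);
  DMSwarmGetField(swarm,"coor",NULL,NULL,(void**)&coor);
  DMSwarmGetField(swarm,"DMSwarm_rank",NULL,NULL,(void**)&srank);
  for (p=0; p<npoints; p++) {
    srank[p] = rank; /* default: the point stays on the current rank */
    for (k=0; k<size; k++) {
      if (coor[p] >= bounds[2*k] && coor[p] <= bounds[2*k+1]) {
        srank[p] = k;
        break;
      }
    }
  }
  DMSwarmRestoreField(swarm,"DMSwarm_rank",NULL,NULL,(void**)&srank);
  DMSwarmRestoreField(swarm,"coor",NULL,NULL,(void**)&coor);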

[5]
Call DMSwarmMigrate(swarm,PETSC_TRUE) and your data will get shipped to
where you indicated it should go.
Now all the points you want to interpolate to should be located within the
correct sub-domains of dmda2.
Note that the sent points are removed from the local dmswarm storage (that
is what the PETSC_TRUE argument requests).

[6]
Get the registered swarm fields "coor" and "dmda_index"
VecZeroEntries(F1)
for all points in the swarm, i
  locate which cell in dmda2 contains swarm[i]->"coor"
  /* interpolate F2 defined on dmda2 to swarm[i]->"coor" - call this F_interp */
  F_interp = .....
  VecSetValue(F1,swarm[i]->"dmda_index",F_interp,INSERT_VALUES);

VecAssemblyBegin(F1)
VecAssemblyEnd(F1)
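A concrete sketch of [6] with linear interpolation (assuming dof = 1; I use
ghosted local vectors for F2 and the coordinates so that the right-hand
neighbour of the last locally owned node is available, and a linear search
for the containing cell - a bisection search would be better for large
local sizes):

  Vec               xloc,F2loc;
  const PetscScalar *xa,*fa;
  PetscReal         *coor;
  PetscInt          *gidx,npoints,nloc,p,i;

  DMGetCoordinatesLocal(dmda2,&xloc);
  DMGetLocalVector(dmda2,&F2loc);
  DMGlobalToLocalBegin(dmda2,F2,INSERT_VALUES,F2loc);
  DMGlobalToLocalEnd(dmda2,F2,INSERT_VALUES,F2loc);
  VecGetArrayRead(xloc,&xa);
  VecGetArrayRead(F2loc,&fa);
  VecGetLocalSize(xloc,&nloc);

  DMSwarmGetLocalSize(swarm,&npoints);
  DMSwarmGetField(swarm,"coor",NULL,NULL,(void**)&coor);
  DMSwarmGetField(swarm,"dmda_index",NULL,NULL,(void**)&gidx);
  VecZeroEntries(F1);
  for (p=0; p<npoints; p++) {
    PetscReal   xp       = coor[p];
    PetscScalar F_interp = fa[nloc-1]; /* fallback: xp sits on the last node */
    for (i=0; i<nloc-1; i++) {
      if (xp >= PetscRealPart(xa[i]) && xp <= PetscRealPart(xa[i+1])) {
        PetscReal w = (xp - PetscRealPart(xa[i]))
                      /(PetscRealPart(xa[i+1]) - PetscRealPart(xa[i]));
        F_interp = (1.0 - w)*fa[i] + w*fa[i+1]; /* linear interpolant */
        break;
      }
    }
    VecSetValue(F1,gidx[p],F_interp,INSERT_VALUES);
  }
  DMSwarmRestoreField(swarm,"dmda_index",NULL,NULL,(void**)&gidx);
  DMSwarmRestoreField(swarm,"coor",NULL,NULL,(void**)&coor);
  VecRestoreArrayRead(F2loc,&fa);
  VecRestoreArrayRead(xloc,&xa);
  DMRestoreLocalVector(dmda2,&F2loc);
  VecAssemblyBegin(F1);
  VecAssemblyEnd(F1);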


Hope this helps.

Thanks,
  Dave



>   Thanks,
>
>     Matt
>
>
>> My problem comes from the solution of a 1D time-evolving PDE coupled with
>> a hand-made mesh adaptation algorithm. Therefore I’m already using a DMDA
>> to manage all the communication issues among processors. Every n timesteps
>> I move the grid and then I need to evaluate the old solution on the new
>> grid points, which is why I need an efficient way to do it. Is it feasible
>> to let DMDA objects speak with DMPlexes, in order to keep the previous
>> code structure unaltered and use DMPlex only for the interpolation routine?
>>
>> Thanks
>>
>>
>>
>> Il giorno 06/nov/2018, alle ore 15:14, Matthew Knepley <knepley at gmail.com>
>> ha scritto:
>>
>> On Tue, Nov 6, 2018 at 5:11 AM Francesco Magaletti via petsc-users <
>> petsc-users at mcs.anl.gov> wrote:
>>
>>> Dear all,
>>>
>>> I would like to ask you if there is an easy and efficient way in
>>> parallel (using PETSc functions) to interpolate a DMDA vector associated
>>> with a nonuniform 1D grid to another DMDA vector with the same length but
>>> associated with a different nonuniform grid.
>>>
>>> Let me rephrase it to be as clear as I can:
>>>
>>> I have two structured nonuniform 1D grids with coordinate vectors x[i]
>>> and y[i]. Both the domains have been discretized with the same number of
>>> points, but the coordinate vectors x and y are different. I have a
>>> discretized field u[i] = u(x[i]) and I would like to use these point values
>>> to evaluate the values u(y[i]) in the points of the second grid.
>>>
>>> I read on the manual pages that functions like DMCreateInterpolation or
>>> similar work only with different but uniform DMDAs. Did I understand
>>> correctly?
>>>
>>> A naive approach, with a serial code, could be to find the points x[i]
>>> and x[i+1] that surround the point y[j] for every j, and then simply
>>> interpolate linearly between the values u[i] and u[i+1]. I suspect that
>>> this is not the most efficient way to do it. Moreover, it won’t work in
>>> parallel since, in principle, I do not know beforehand how many ghost
>>> nodes could be necessary to perform all the interpolations.
>>>
>>> Thank you in advance for your help!
>>>
>>
>> This has not been written, but it is not that hard. You would first bin
>> the points into cells, and then use the interpolant from the
>> discretization. This is how we do it in the unstructured case. Actually,
>> if you wrote your nonuniform 1D meshes as DMPlexes, I think it would work
>> right now (I have not tested this).
>>
>>   Thanks,
>>
>>    Matt
>>
>>
>>> Francesco
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>