[petsc-users] parallel interpolation?
Gideon Simpson
gideon.simpson at gmail.com
Tue Feb 17 09:41:43 CST 2015
This warrants a deeper description. I was thinking about trying to use PETSc to solve a 1D PDE problem using the equidistribution principle for adaptive meshing. Briefly, the idea is that given a function u(x), where x is the physical coordinate, a monitor function w[u(x)] > 0 is introduced such that
y(x) = (1/C) \int_{0}^x w[u(s)] ds,    C = \int_{0}^{xmax} w[u(s)] ds,

where C is the normalizing constant and y is the computational coordinate in [0,1]. Essentially, you want to pick the physical mesh points x_i such that
\int_{x_i}^{x_{i+1}} w[u(s)] ds = constant
so that the corresponding computational mesh points y(x_i) are uniformly spaced. Solving this can be posed as a nonlinear elliptic equation in general dimension, though it simplifies further in 1D.
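Concretely, in 1D: since w[u(x)] dx/dy = C is constant in y, the equidistribution condition is equivalent to the two-point boundary value problem

d/dy ( w[u(x(y))] dx/dy ) = 0,    x(0) = 0,    x(1) = xmax,

for the mesh map x(y).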
In a paper on the 1D problem, Gavish and Ditkowski proposed the following: given a starting guess for the physical mesh points, let
F_i = \int_0^{x_i} w[u(s)]ds/\int_0^{x_N} w[u(s)]ds
Then 0 = F_0 < F_1 < ... < F_N = 1. Rather than solve the nonlinear system for the x_i that makes the successive differences F_{i+1} - F_i all equal, they propose the following:
interpolate the data (F_i, x_i) onto the uniform mesh in y; the interpolated x_i's should approximately satisfy the equidistribution principle. This can be iterated to improve the equidistribution.
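As a serial reference point, here is a minimal sketch of one such sweep (plain C, trapezoidal quadrature for F and piecewise-linear inversion onto the uniform y mesh; the function name and calling convention are placeholders, not from the paper):

#include <stdlib.h>

/* One equidistribution sweep (serial sketch): given mesh x[0..N] and
 * monitor values w[0..N] (w[i] = w[u(x[i])] > 0), overwrite x with the
 * mesh obtained by interpolating (F_i, x_i) onto the uniform y mesh. */
static void EquidistributeSweep(int N, double *x, const double *w)
{
  double *F    = (double *) malloc((N + 1) * sizeof(double));
  double *xnew = (double *) malloc((N + 1) * sizeof(double));
  int     i, j = 0;

  /* cumulative trapezoidal quadrature: F_i = \int_0^{x_i} w[u(s)] ds */
  F[0] = 0.0;
  for (i = 1; i <= N; i++) F[i] = F[i-1] + 0.5 * (w[i-1] + w[i]) * (x[i] - x[i-1]);
  /* normalize so that F_N = 1 */
  for (i = 0; i <= N; i++) F[i] /= F[N];

  /* interpolate (F_i, x_i) onto y_k = k/N: F is monotone, so the bracketing
   * index j only ever moves forward */
  xnew[0] = x[0];
  xnew[N] = x[N];
  for (i = 1; i < N; i++) {
    double y = (double) i / N;
    while (F[j+1] < y) j++;
    xnew[i] = x[j] + (y - F[j]) / (F[j+1] - F[j]) * (x[j+1] - x[j]);
  }
  for (i = 0; i <= N; i++) x[i] = xnew[i];

  free(F);
  free(xnew);
}

In practice the sweep would be repeated until the differences F_{i+1} - F_i are all sufficiently close to 1/N.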
Doing the quadrature with distributed vectors is not too hard. The question then becomes how to do the interpolation in parallel. As I was suggesting in the earlier example, if y_i = i*h is the uniform mesh, y_i could be on proc 0 while the values of {F_j} that it lies between are on proc 1. There is monotonicity at work here, but I'm not sure I can say, when defining ghost points, how far things will spread out.
> On Feb 17, 2015, at 10:11 AM, Matthew Knepley <knepley at gmail.com> wrote:
>
> On Tue, Feb 17, 2015 at 8:15 AM, Gideon Simpson <gideon.simpson at gmail.com> wrote:
> I'm gathering from your suggestions that I would need a priori knowledge of how many ghost points are required, is that right?
>
> We have to be more precise about a priori. You can certainly create a VecScatter on the fly every time
> if your communication pattern is changing. However, how will you know what needs to be communicated?
>
> Matt
>
> -gideon
>
>> On Feb 17, 2015, at 9:10 AM, Matthew Knepley <knepley at gmail.com> wrote:
>>
>> On Tue, Feb 17, 2015 at 7:46 AM, Gideon Simpson <gideon.simpson at gmail.com> wrote:
>> Suppose I have data in Vec x and Vec y, and I want to interpolate this onto Vec xx, storing the values in Vec yy. All vectors have the same layout. The problem is that, for example, some of the values in xx on processor 0 may need the values of x and y on processor 1, and so on. Aside from just using sequential vectors, so that everything is local, is there a reasonable way to make this computation?
>>
>> At the most basic linear algebra level, you would construct a VecScatter which maps the pieces you need from other processes into a local vector along with the local portion, and you would use that to calculate values, which you then put back into your owned portion of a global vector. Thus local vectors have halos and global vectors do not.
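For instance, a minimal sketch of that pattern, assuming the needed global indices of x have already been worked out (the helper name is made up and error handling is abbreviated):

#include <petscvec.h>

/* Pull an arbitrary set of entries of a distributed Vec x, identified by
 * global indices needed[0..n-1] (possibly owned by other ranks), into a
 * sequential work vector xwork on this rank. */
PetscErrorCode GatherNeededEntries(Vec x, PetscInt n, const PetscInt needed[], Vec *xwork)
{
  IS             from;
  VecScatter     scatter;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = VecCreateSeq(PETSC_COMM_SELF, n, xwork);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_SELF, n, needed, PETSC_COPY_VALUES, &from);CHKERRQ(ierr);
  /* NULL destination IS means entries 0..n-1 of xwork, in order */
  ierr = VecScatterCreate(x, from, *xwork, NULL, &scatter);CHKERRQ(ierr);
  ierr = VecScatterBegin(scatter, x, *xwork, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(scatter, x, *xwork, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = ISDestroy(&from);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}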
>>
>> If your halo regions (values you need from other processes) have a common topology, then we have simpler
>> support that will make the VecScatter for you. For example, if your values lie on a Cartesian grid and you
>> just need neighbors within distance k, you can use a DMDA to express this and automatically make the
>> VecScatter. Likewise, if your values lie on an unstructured mesh and you need a distance k adjacency,
>> DMPlex can create the scatter for you.
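Concretely, a minimal DMDA sketch along those lines, with an assumed stencil width of 2 (the width and the helper name are illustrative, not from the thread):

#include <petscdmda.h>

/* A 1d DMDA whose local vectors carry 2 ghost values from each neighbor. */
PetscErrorCode HaloUpdateExample(MPI_Comm comm, PetscInt N)
{
  DM             da;
  Vec            xglobal, xlocal;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = DMDACreate1d(comm, DM_BOUNDARY_NONE, N, 1 /* dof */, 2 /* stencil width */, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da, &xglobal);CHKERRQ(ierr);
  ierr = DMCreateLocalVector(da, &xlocal);CHKERRQ(ierr);

  /* ... fill xglobal ... */

  /* update the halo: global (no ghosts) -> local (with ghosts) */
  ierr = DMGlobalToLocalBegin(da, xglobal, INSERT_VALUES, xlocal);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(da, xglobal, INSERT_VALUES, xlocal);CHKERRQ(ierr);

  ierr = VecDestroy(&xlocal);CHKERRQ(ierr);
  ierr = VecDestroy(&xglobal);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}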
>>
>> If you are creating the VecScatter yourself, it might be easier to use the new PetscSF instead since it only needs one-sided information, and performs the same job. This is what DMPlex uses to do the communication.
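A rough sketch of the PetscSF route, assuming each rank already knows which (rank, index) pairs it needs (newer PETSc releases add an MPI_Op argument to the Bcast calls):

#include <petscsf.h>

/* Each rank declares the remote entries it wants ("leaves" pointing at
 * remote "roots"); a broadcast then moves root data to the leaves. */
PetscErrorCode PullRemoteValues(MPI_Comm comm, PetscInt nowned, const PetscScalar owned[],
                                PetscInt nwanted, PetscSFNode wanted[], PetscScalar pulled[])
{
  PetscSF        sf;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscSFCreate(comm, &sf);CHKERRQ(ierr);
  /* roots: the nowned locally owned entries; leaves: the nwanted remote
   * (rank, index) pairs, landing contiguously in pulled[] (ilocal = NULL) */
  ierr = PetscSFSetGraph(sf, nowned, nwanted, NULL, PETSC_COPY_VALUES, wanted, PETSC_COPY_VALUES);CHKERRQ(ierr);
  ierr = PetscSFBcastBegin(sf, MPIU_SCALAR, owned, pulled);CHKERRQ(ierr);
  ierr = PetscSFBcastEnd(sf, MPIU_SCALAR, owned, pulled);CHKERRQ(ierr);
  ierr = PetscSFDestroy(&sf);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}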
>>
>> Thanks,
>>
>> Matt
>>
>> -gideon
>>
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener