[petsc-users] Using DMDA for a block-structured grid approach
Barry Smith
bsmith at petsc.dev
Mon Jun 26 16:39:09 CDT 2023
> On Jun 26, 2023, at 5:12 PM, Srikanth Sathyanarayana <srikanth.sathyanarayana at mpcdf.mpg.de> wrote:
>
> Dear Barry and Mark,
>
> Thank you very much for your response.
>
>>> The allocation for what?
> What I mean is that we don't want additional memory allocations through DMDA vectors. I am not sure if it is even possible; basically, we would want to wrap our existing arrays through VecCreateMPIWithArray, for example, and implement a way for them to interact with the DMDA structure so that it can assist with ghost updates for each block.
So long as the vectors are the same size as those the DMDA would give you, they work just as if you had obtained them from the DMDA.
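
   For example, here is a minimal sketch of what that could look like (the grid sizes are made up, and the PetscMalloc1 call simply stands in for your solver's existing storage):

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM           da;
  Vec          tmpl, g, l;
  PetscInt     nlocal;
  PetscScalar *myarray; /* stands in for the solver's own storage */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* one Cartesian block; sizes are illustrative only */
  PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_STAR, 64, 32, PETSC_DECIDE, PETSC_DECIDE,
                         1, 1, NULL, NULL, &da));
  PetscCall(DMSetUp(da));

  /* ask the DMDA how long its global vectors are on this rank */
  PetscCall(DMCreateGlobalVector(da, &tmpl));
  PetscCall(VecGetLocalSize(tmpl, &nlocal));
  PetscCall(VecDestroy(&tmpl));

  /* wrap user-owned memory of exactly that length; no copy is made */
  PetscCall(PetscMalloc1(nlocal, &myarray));
  PetscCall(VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, nlocal, PETSC_DETERMINE, myarray, &g));
  PetscCall(VecSet(g, 0.0)); /* in practice the solver's data is already in myarray */

  /* ghost exchange into a DMDA local (ghosted) vector, as with any global vector */
  PetscCall(DMCreateLocalVector(da, &l));
  PetscCall(DMGlobalToLocalBegin(da, g, INSERT_VALUES, l));
  PetscCall(DMGlobalToLocalEnd(da, g, INSERT_VALUES, l));

  PetscCall(VecDestroy(&l));
  PetscCall(VecDestroy(&g));
  PetscCall(PetscFree(myarray));
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}

   The wrapped vector takes part in DMGlobalToLocalBegin/End exactly as a vector obtained from DMCreateGlobalVector would; the ghost exchange only depends on the DMDA's layout and on the vector having the matching local and global sizes.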
> Further, we would need to figure out a way to also perform some kind of interpolation across the block boundaries before the ghost exchange.
>
>> I think you have an application that has a Cartesian, or at least fine, grid and you "have to implement a block structured grid approach".
>> Is this block structured solver well developed?
>> We have support for block structured (quad-tree) grids you might want to use. This is a common approach for block structured grids.
> We would like to develop a multi-block block-structured grid library, mainly to reduce the number of grid points used. We want to use PETSc mostly as a distributed data container, to simplify performing interpolations between the blocks and to help with the ghost exchanges. Currently, we are not looking into any grid refinement techniques.
I suggest exploring whether there are other libraries that provide multi-block block-structured grids that you might use, possibly in conjunction with the PETSc solvers. Providing a general multi-block block-structured grid library is a big, complicated enterprise, and PETSc does not provide such a thing. Certain parts can be hacked together with DMDA and DMCOMPOSITE, but not as cleanly as a properly designed library would do it.
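
   For completeness, here is a rough sketch of what the DMDA plus DMCOMPOSITE hack could look like (two blocks with different x resolution; the sizes are made up, and any interpolation across the block interface would be entirely up to your code, since DMComposite only manages the per-block storage and the ghost exchange within each block):

#include <petscdmda.h>
#include <petscdmcomposite.h>

int main(int argc, char **argv)
{
  DM  da1, da2, pack;
  Vec X, x1, x2, l1, l2;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* two Cartesian blocks with different resolution in x (sizes are illustrative) */
  PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_STAR, 32, 32, PETSC_DECIDE, PETSC_DECIDE,
                         1, 1, NULL, NULL, &da1));
  PetscCall(DMSetUp(da1));
  PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_STAR, 16, 32, PETSC_DECIDE, PETSC_DECIDE,
                         1, 1, NULL, NULL, &da2));
  PetscCall(DMSetUp(da2));

  /* glue the blocks together; the composite only concatenates them and knows
     nothing about how they meet geometrically */
  PetscCall(DMCompositeCreate(PETSC_COMM_WORLD, &pack));
  PetscCall(DMCompositeAddDM(pack, da1));
  PetscCall(DMCompositeAddDM(pack, da2));

  /* one global vector spanning both blocks */
  PetscCall(DMCreateGlobalVector(pack, &X));

  /* per-block views of the composite vector; interpolation at the block
     interface would have to be coded by hand here */
  PetscCall(DMCompositeGetAccess(pack, X, &x1, &x2));
  PetscCall(VecSet(x1, 1.0));
  PetscCall(VecSet(x2, 2.0));
  PetscCall(DMCompositeRestoreAccess(pack, X, &x1, &x2));

  /* ghost exchange within each block */
  PetscCall(DMCreateLocalVector(da1, &l1));
  PetscCall(DMCreateLocalVector(da2, &l2));
  PetscCall(DMCompositeScatter(pack, X, l1, l2));

  PetscCall(VecDestroy(&l1));
  PetscCall(VecDestroy(&l2));
  PetscCall(VecDestroy(&X));
  PetscCall(DMDestroy(&da1));
  PetscCall(DMDestroy(&da2));
  PetscCall(DMDestroy(&pack));
  PetscCall(PetscFinalize());
  return 0;
}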
>
> Thanks,
> Srikanth
>
>
>> On 26 Jun 2023, at 21:32, Mark Adams <mfadams at lbl.gov> wrote:
>>
>> Let me backup a bit.
>> I think you have an application that has a Cartesian, or at least fine, grid and you "have to implement a block structured grid approach".
>> Is this block structured solver well developed?
>> We have support for block structured (quad-tree) grids you might want to use. This is a common approach for block structured grids.
>>
>> Thanks,
>> Mark
>>
>>
>>
>> On Mon, Jun 26, 2023 at 12:08 PM Barry Smith <bsmith at petsc.dev> wrote:
>>>
>>>
>>> > On Jun 26, 2023, at 11:44 AM, Srikanth Sathyanarayana <srcs at mpcdf.mpg.de> wrote:
>>> >
>>> > Dear PETSc developers,
>>> >
>>> >
>>> > I am currently working on a gyrokinetic code where I essentially have to implement a block-structured grid approach in one of the subdomains of the phase-space coordinates. I have attached one such example in the x - v_parallel subdomain, where I go from a full grid to a grid based on 4 blocks (divided along the x direction), which is still Cartesian but misaligned across blocks (the grid is a very coarse representation). So the idea is basically to create a library for the existing solver and try to implement the block-structured grid approach, which mainly involves some sort of interpolation between the blocks to align the points.
>>> >
>>> >
>>> > I came up with an idea to implement this using DMDA. I looked into the old threads where you suggested using DMComposite to tackle such problems, although a clear path for the interpolation between the DMs was not laid out. Nonetheless, my main questions were:
>>> >
>>> > 1. Do you still suggest using DMComposite to approach this problem?
>>>
>>> Unfortunately, that is all we have for combining DMs. You can use unstructured grids, structured grids, or structured grids with quad-tree-type refinement, but we don't have a "canned" approach for combining a bunch of structured grids together efficiently and cleanly (lots of issues come up in trying to design such a thing in a distributed-memory environment, since some blocks may need to live on a different number of MPI ranks).
>>> >
>>> > 2. Is there a way to use DMDA where the user provides the allocation? My main problem is that I am not allowed to change the solver's data structures.
>>>
>>> The allocation for what?
>>> >
>>> > 3. I looked into VecCreateMPIWithArray for the user-provided allocation; however, I am not very sure if such a vector can be used with the DMDA operations.
>>>
>>> Yes, you can use these variants to create vectors that you use with DMDA, so long as they have the correct dimensions.
>>> >
>>> >
>>> > Overall, please let me know what you think of this approach (using DMDA); I would be grateful if you could suggest any alternatives.
>>> >
>>> >
>>> > Thanks and regards,
>>> >
>>> > Srikanth
>>> > <Screenshot from 2023-06-26 17-24-32.png>
>>>
>