<div dir="ltr">Let me back up a bit.<div>I think you have an application that has a Cartesian, or at least a fine, grid and you "have to implement a block structured grid approach".</div><div>Is this block structured solver well developed?</div><div>We have support for block structured (quad-tree) grids that you might want to use; this is a common approach for block structured grids.</div><div><br></div><div>Thanks,</div><div>Mark</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jun 26, 2023 at 12:08 PM Barry Smith <<a href="mailto:bsmith@petsc.dev">bsmith@petsc.dev</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>
> On Jun 26, 2023, at 11:44 AM, Srikanth Sathyanarayana <<a href="mailto:srcs@mpcdf.mpg.de" target="_blank">srcs@mpcdf.mpg.de</a>> wrote:<br>
> <br>
> Dear PETSc developers,<br>
> <br>
> <br>
> I am currently working on a gyrokinetic code where I essentially have to implement a block structured grid approach in one of the subdomains of the phase-space coordinates. I have attached one such example in the x - v_parallel subdomain, where I go from a full grid to a grid based on 4 blocks (divided along the x direction) that is still Cartesian but misaligned across blocks (the grid shown is a very coarse representation). The idea is to create a library for the existing solver and implement the block structured grid approach, which mainly involves some sort of interpolation between the blocks to align the points.<br>
> <br>
> <br>
> I came up with an idea to implement this using DMDA. I looked into the old threads where you suggested using DMComposite to tackle such problems, although a clear path for the interpolation between the DMs was not described. Nonetheless, my main questions are:<br>
> <br>
> 1. Do you still suggest using DMComposite to approach this problem?<br>
<br>
Unfortunately, that is all we have for combining DMs. You can use unstructured grids, structured grids, or unstructured grids with quad-tree-type refinement, but we don't have a "canned" approach for combining a bunch of structured grids together efficiently and cleanly (lots of issues come up in trying to design such a thing in a distributed-memory environment, since some blocks may need to live on different numbers of MPI ranks).<br>
> <br>
> 2. Is there a way to use DMDA where the user provides the allocation? My main problem is that I am not allowed to change the solver's data structures.<br>
<br>
The allocation for what?<br>
> <br>
> 3. I looked into VecCreateMPIWithArray for the user-provided allocation, but I am not sure whether this vector can be used with the DMDA operations.<br>
<br>
Yes, you can use these variants to create vectors that you use with DMDA, so long as they have the correct dimensions. <br>
> <br>
> <br>
> Overall, please let me know what you think of this approach (using DMDA); I would be grateful if you could suggest any alternatives.<br>
> <br>
> <br>
> Thanks and regards,<br>
> <br>
> Srikanth<br>
> <Screenshot from 2023-06-26 17-24-32.png><br>
<br>
</blockquote></div>