<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><br><div><br><blockquote type="cite"><div>On Jun 26, 2023, at 5:12 PM, Srikanth Sathyanarayana <srikanth.sathyanarayana@mpcdf.mpg.de> wrote:</div><br class="Apple-interchange-newline"><div><meta http-equiv="content-type" content="text/html; charset=utf-8"><div style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">Dear Barry and Mark,<div><br></div><div>Thank you very much for your response.</div><div><br></div><div><blockquote type="cite"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin: 0px 0px 0px 0.8ex; border-left-width: 1px; border-left-style: solid; border-left-color: rgb(204, 204, 204); padding-left: 1ex;"> The allocation for what?</blockquote></div></blockquote>What I mean is that we don’t want additional memory allocations through DMDA vectors. 
I am not sure if it is even possible; basically, we would want to map our existing vectors through VecCreateMPIWithArray, for example, and implement a way for them to interact with the DMDA structure so that it can assist ghost updates for each block.</div></div></div></blockquote><div><br></div> So long as the vectors are the same size as those that DMDA would give you, they work just as if you had obtained them from DMDA.</div><div><br><blockquote type="cite"><div><div style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div> Further, figure out a way to also perform some kind of interpolation between the block boundaries before the ghost exchange.</div><div><br></div><div><blockquote type="cite"><div dir="ltr"><div>I think you have an application that has a Cartesian, or at least fine, grid and you "have to implement a block structured grid approach".</div><div>Is this block structured solver well developed?</div><div>We have support for block structured (quad-tree) grids you might want to use. This is a common approach for block structured grids.</div></div></blockquote>We would like to develop a multi-block block-structured grid library, mainly to reduce the number of grid points used. We want to use PETSc mainly as a kind of distributed data container, to simplify the process of performing interpolations between the blocks and to help with the ghost exchanges. Currently, we are not looking into any grid refinement techniques. </div></div></div></blockquote><div><br></div> I suggest exploring whether there are other libraries that provide multi-block block-structured grids that you might use, possibly in conjunction with the PETSc solvers. Providing a general multi-block block-structured grid library is a big, complicated enterprise, and PETSc does not provide such a thing. 
Certain parts can be hacked together with DMDA and DMCOMPOSITE, but not as cleanly as a properly designed library would do it.</div><div><br></div><div><br><blockquote type="cite"><div><div style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div><br></div><div>Thanks,</div><div>Srikanth</div><div><br><div><br><blockquote type="cite"><div>On 26 Jun 2023, at 21:32, Mark Adams <mfadams@lbl.gov> wrote:</div><br class="Apple-interchange-newline"><div><div dir="ltr">Let me back up a bit.<div>I think you have an application that has a Cartesian, or at least fine, grid and you "have to implement a block structured grid approach".</div><div>Is this block structured solver well developed?</div><div>We have support for block structured (quad-tree) grids you might want to use. This is a common approach for block structured grids.</div><div><br></div><div>Thanks,</div><div>Mark</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jun 26, 2023 at 12:08 PM Barry Smith <<a href="mailto:bsmith@petsc.dev">bsmith@petsc.dev</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>
> On Jun 26, 2023, at 11:44 AM, Srikanth Sathyanarayana <<a href="mailto:srcs@mpcdf.mpg.de" target="_blank">srcs@mpcdf.mpg.de</a>> wrote:<br>
> <br>
> Dear PETSc developers,<br>
> <br>
> <br>
> I am currently working on a gyrokinetic code in which I essentially have to implement a block-structured grid approach in one of the subdomains of the phase-space coordinates. I have attached one such example in the x - v_parallel subdomain, where I go from a full grid to a grid based on 4 blocks (divided along the x direction) which is still Cartesian but misaligned across blocks (the grid shown is a very coarse representation). The idea is to create a library for the existing solver and implement the block-structured grid approach, which mainly involves some sort of interpolation between the blocks to align the points.<br>
> <br>
> <br>
> I came up with an idea to implement this using DMDA. I looked into the old threads where you suggested using DMComposite to tackle such problems, although a clear path for the interpolation between the DMs was not clarified. Nonetheless, my main questions are:<br>
> <br>
> 1. Do you still suggest using DMComposite to approach this problem?<br>
<br>
Unfortunately, that is all we have for combining DMs. You can use unstructured, structured, or structured with quad-tree-type refinement, but we don't have a "canned" approach for combining a bunch of structured grids together efficiently and cleanly (many issues come up in trying to design such a thing in a distributed-memory environment, since some blocks may need to live on a different number of MPI ranks).<br>
> <br>
> 2. Is there a way to use DMDA where the user provides the allocation? My main problem is that I am not allowed to change the solver's data structures.<br>
<br>
The allocation for what?<br>
> <br>
> 3. I looked into VecCreateMPIWithArray for the user-provided allocation; however, I am not sure whether such a vector can be used with the DMDA operations.<br>
<br>
Yes, you can use these variants to create vectors that you use with DMDA, so long as they have the correct dimensions. <br>
> <br>
> <br>
> Overall, please let me know what you think of this approach (using DMDA); I would be grateful if you could suggest any alternatives.<br>
> <br>
> <br>
> Thanks and regards,<br>
> <br>
> Srikanth<br>
> <Screenshot from 2023-06-26 17-24-32.png><br>
<br>
</blockquote></div>
</div></blockquote></div><br></div></div></div></blockquote></div><br></body></html>