Thanks for the response, Matt. I am clearly a little slow to respond, as I
have been working on other things.

Right now there is nothing in particular that isn't working for us, since we
haven't really started things yet.
The interaction between our blocks can be complicated, but we don't use ghost
cells at all, and the interaction is handled weakly through interfaces, so I
would think this will simplify things (we use summation-by-parts finite
difference methods with the simultaneous approximation term method, if that
happens to mean anything to you). We haven't really thought through our
Jacobians yet, but the nonlinearities in our problems do arise through the
coupling terms.
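
To give a flavor of the coupling, here is a hypothetical 1D sketch (not our
actual code; apply_sbp_D1 is an assumed helper applying an SBP first-derivative
operator, and all names are illustrative). For advection, a block's residual
sees its neighbor only through a penalty on the interface value:

  /* Right block of u_t + a*u_x = 0 (a > 0): the interface at i = 0 is an
     inflow, so the SAT term weakly penalizes the mismatch with the left
     block's interface value. Hinv0 is the inverse norm weight at that node;
     tau is a penalty parameter chosen for stability. */
  void rhs_right_block(int n, double a, const double *u, double uL_iface,
                       double Hinv0, double tau, double *f)
  {
    apply_sbp_D1(n, u, f);                        /* f := D1*u         */
    for (int i = 0; i < n; i++) f[i] = -a * f[i]; /* f := -a*D1*u      */
    f[0] += tau * Hinv0 * (u[0] - uL_iface);      /* weak SAT coupling */
  }

The only dependence on the neighboring block is uL_iface, which is why we
expect the off-block Jacobian entries to be confined to interface rows.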
<div><br><div class="gmail_quote">On Sun, Apr 15, 2012 at 2:54 PM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im">On Sun, Apr 15, 2012 at 4:24 PM, Jeremy Kozdon <span dir="ltr"><<a href="mailto:jkozdon@stanford.edu" target="_blank">jkozdon@stanford.edu</a>></span> wrote:<br></div><div class="gmail_quote"><div class="im">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> Hello All,
>>
>> We are considering using PETSc for a new code we are developing that
>> involves multiblock grids and finite difference methods. I am still pretty
>> new to PETSc (still reading the manual), so I am wondering where a good
>> place to look for examples of this is, since I assume we are not the first
>> to consider it. I am not sure whether the best place is somewhere in the
>> PETSc source, one of the library's examples, or another open-source
>> package.
>>
>> One of the things we want to be able to do is assign different numbers of
>> processors to different blocks based on the workload we estimate for each
>> block. Currently we do this by decomposing the global communicator into
>> block communicators, but if I understand correctly this is not the right
>> approach to take with PETSc, since we will need to use DMComposite, which
>> relies on collective operations.
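>>
>> Concretely, the current setup is roughly this (hypothetical sketch, names
>> illustrative):
>>
>>   MPI_Comm block_comm;
>>   int      color = block_of_rank(rank); /* assumed helper: block index */
>>   /* ranks with the same color end up in the same block communicator */
>>   MPI_Comm_split(MPI_COMM_WORLD, color, 0, &block_comm);
>>   /* each block's grid and solver then live on block_comm */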
>
> The Cartesian grid interface is the DMDA class in PETSc. You can prescribe
> whatever processor layout you want. The key question is how the blocks will
> interact. DMComposite is fine for putting blocks together to generate a
> combined residual, but the Jacobian interface is not as fleshed out; for
> example, it does not automatically allocate the off-diagonal blocks arising
> from block interaction. I don't think there are any problems with
> collectives in DMComposite. Do you have a specific thing that does not work
> for you?
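>
> For example, something like this (untested sketch; sizes and splits are
> made up, and the exact signature may differ in your PETSc version)
> prescribes an uneven ownership split on a 2x2 process grid:
>
>   DM       da;
>   PetscInt lx[2] = {48, 16};   /* uneven ownership in x over 2 procs */
>   PetscInt ly[2] = {32, 32};   /* even ownership in y over 2 procs   */
>   /* 64x64 grid, 2x2 process grid (run on 4 ranks), 1 dof, stencil width 1 */
>   DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,
>                DMDA_STENCIL_STAR, 64, 64, 2, 2, 1, 1, lx, ly, &da);
>
> The process grid has to match the communicator size; lx/ly just weight how
> many grid points each process owns.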
>
>> Currently everything we are doing is with structured Cartesian grids, but
>> in the future we will also need to be able to handle unstructured grids
>> coupled to the structured grids.
>
> This can also be done with DMComposite, using DMComplex for the unstructured
> grids.
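>
> Roughly (untested sketch; construction of the unstructured DM is elided):
>
>   DM  pack, dmda, dmcomplex;
>   Vec X;
>   /* dmda: a DMDA as above; dmcomplex: a DMComplex holding the
>      unstructured mesh */
>   DMCompositeCreate(PETSC_COMM_WORLD, &pack);
>   DMCompositeAddDM(pack, dmda);
>   DMCompositeAddDM(pack, dmcomplex);
>   DMCreateGlobalVector(pack, &X);   /* combined layout for residuals */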
>
>    Matt
>
>> Thanks!
>> Jeremy
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener