On Fri, Dec 2, 2011 at 9:59 AM, Barry Smith <bsmith@mcs.anl.gov> wrote:

On Dec 2, 2011, at 9:52 AM, Treue, Frederik wrote:
>
> From: petsc-users-bounces@mcs.anl.gov [mailto:petsc-users-bounces@mcs.anl.gov] On Behalf Of Matthew Knepley
> Sent: Friday, December 02, 2011 4:32 PM
> To: PETSc users list
> Subject: Re: [petsc-users] newbie question on the parallel allocation of matrices
>
> On Fri, Dec 2, 2011 at 9:25 AM, Treue, Frederik <frtr@risoe.dtu.dk> wrote:
>
> From: petsc-users-bounces@mcs.anl.gov [mailto:petsc-users-bounces@mcs.anl.gov] On Behalf Of Matthew Knepley
> Sent: Friday, December 02, 2011 4:01 PM
> To: PETSc users list
> Subject: Re: [petsc-users] newbie question on the parallel allocation of matrices
>
> On Fri, Dec 2, 2011 at 8:58 AM, Treue, Frederik <frtr@risoe.dtu.dk> wrote:
>
> From: petsc-users-bounces@mcs.anl.gov [mailto:petsc-users-bounces@mcs.anl.gov] On Behalf Of Jed Brown
> Sent: Friday, December 02, 2011 1:32 PM
> To: PETSc users list
> Subject: Re: [petsc-users] newbie question on the parallel allocation of matrices
>
> On Fri, Dec 2, 2011 at 03:32, Treue, Frederik <frtr@risoe.dtu.dk> wrote:
> OK, but that example seems to assume that you wish to connect only one matrix (the Jacobian) to a DA, whereas I wish to specify many. I think I found this done in KSP ex39; is that example doing anything deprecated, or will it work for me, e.g. with the various basic Mat routines (MatMult, MatAXPY, etc.) in a multiprocessor setup?
>
> What do you mean by wanting many matrices? How do you want to use them? There is DMCreateMatrix() (misnamed DMGetMatrix() in petsc-3.2), which you can use as many times as you want.
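For illustration, a minimal sketch (not from the thread) of creating several matrices from one DMDA, assuming the petsc-3.2 names used above (there DMGetMatrix() takes a matrix type; later releases rename it DMCreateMatrix()). The grid size, matrix type, and stencil choice are arbitrary:

#include <petscdmda.h>

/* Sketch: one DMDA handing out several independently preallocated matrices. */
int main(int argc, char **argv)
{
  DM  da;
  Mat A, B;                      /* two independent matrices, same parallel layout */

  PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);
  DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,
               DMDA_STENCIL_STAR, 32, 32, PETSC_DECIDE, PETSC_DECIDE,
               1, 1, PETSC_NULL, PETSC_NULL, &da);

  /* Each call returns a freshly preallocated parallel matrix whose row and
     column layout follows the DMDA decomposition, so MatMult, MatAXPY, ...
     work across processes without any extra setup. */
  DMGetMatrix(da, MATAIJ, &A);
  DMGetMatrix(da, MATAIJ, &B);

  /* ... fill A and B with MatSetValuesStencil(), assemble, and use them ... */

  MatDestroy(&A);
  MatDestroy(&B);
  DMDestroy(&da);
  PetscFinalize();
  return 0;
}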

> And this was the one I needed. However, I have another question: what does DMDA_BOUNDARY_GHOSTED do, compared to DMDA_BOUNDARY_PERIODIC? From experience I now know that the PERIODIC option automagically does the right thing when I'm defining matrices, so I can simply specify the same stencil at all points. Does DMDA_BOUNDARY_GHOSTED do something similar?

No, nothing to do with matrices, because that extra point is a fixed (Dirichlet) value, and so the derivative of the contribution for that value is zero.

> And if so, how is it controlled, i.e. how do I specify whether I've got Neumann or Dirichlet conditions, what order of extrapolation I want, and so forth? And if not, does it then ONLY make a difference if I'm working with more than one processor, i.e. if everything is sequential, are DMDA_BOUNDARY_GHOSTED and DMDA_BOUNDARY_NONE equivalent?

No, they are not. This option has nothing to do with parallelism.

>
> GHOSTED adds extra space at the boundary so you can always use the same stencil, but you decide what goes in there.
>
> Does this apply to both matrices and vectors, i.e. will the ghost points be considered part of my computational domain or not?
>
> The ghost nodes only exist in local vectors, not the global vectors for the solver.
>
> OK? So how does one implement boundary conditions? Normally I would include (say) one extra point over the edge of the domain (the ghost point) and then implement the equation (if I start out with Ax=b, A and b known, x desired, and Dirichlet boundary conditions)
> x_G + x_1 = 2*b_B, where x_G is the unknown at the ghost point, x_1 is the unknown at the first "real" point, and b_B is my Dirichlet boundary condition.
> Thus, I need a special stencil in the first and last rows (i.e. the -1 and nx rows, with nx internal points) of my matrix, but this leads to memory errors. Is this possible while using GHOSTED? As I understand it, GHOSTED also deals with the MPI communication, so I'd like to retain it instead of working with NONE.

Ghosted has nothing to do with MPI communication! Ghosted vs. None is only about the physical boundary, nothing to do with the boundary between domains.

Ghosted is only for the (nonlinear) function evaluation. It is not for storage of the Jacobian/matrix.
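To illustrate that point, here is a minimal 1D sketch (a fragment, assuming PetscInitialize() has been called and petsc-3.2 names; the grid size and the value 1.0 written into the ghost cells are arbitrary). With DMDA_BOUNDARY_GHOSTED the extra cells past the physical boundary exist only in the local vector, and the application decides what to put there:

/* Sketch: GHOSTED gives one extra cell past each physical boundary in the
   *local* vector; the global vector seen by the solver has none. */
DM             da;
Vec            xglobal, xlocal;
PetscScalar    *x;
PetscInt       xs, xm;
const PetscInt Mx = 64;                 /* number of global grid points, arbitrary */

DMDACreate1d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, Mx, 1, 1, PETSC_NULL, &da);
DMCreateGlobalVector(da, &xglobal);
DMGetLocalVector(da, &xlocal);

/* Bring the owned values (plus ghosts) into the local vector ... */
DMGlobalToLocalBegin(da, xglobal, INSERT_VALUES, xlocal);
DMGlobalToLocalEnd(da, xglobal, INSERT_VALUES, xlocal);

/* ... then fill the physical-boundary ghost cells yourself; PETSc does not
   choose between Dirichlet, Neumann, extrapolation, etc. */
DMDAVecGetArray(da, xlocal, &x);
DMDAGetCorners(da, &xs, PETSC_NULL, PETSC_NULL, &xm, PETSC_NULL, PETSC_NULL);
if (xs == 0)       x[-1] = 1.0;         /* left ghost cell, value is up to you  */
if (xs + xm == Mx) x[Mx] = 1.0;         /* right ghost cell, value is up to you */
DMDAVecRestoreArray(da, xlocal, &x);

/* ... evaluate the stencil/residual with x[], then: */
DMRestoreLocalVector(da, &xlocal);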

I suggest just using NONE and making your life and understanding easier. You can do Neumann and Dirichlet boundary conditions with NONE just fine.

You can see us apply Dirichlet conditions with NONE in SNES ex5.
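For the matrix-based setting asked about above, a sketch of Dirichlet conditions with DMDA_BOUNDARY_NONE (not the SNES ex5 code itself; a fragment assuming PetscInitialize() has been called, petsc-3.2 names, and an arbitrary 1D Laplacian, grid size, and boundary value). The boundary unknowns stay in the global system as identity rows, so no ghost unknown x_G is ever stored:

/* Sketch: 1D Laplacian with Dirichlet ends, boundary type NONE.
   Boundary rows are identity rows; the boundary value goes into b. */
DM                da;
Mat               A;
Vec               b;
MatStencil        row, col[3];
PetscScalar       v[3], *barr;
PetscInt          i, xs, xm;
const PetscInt    Mx   = 64;            /* number of grid points, arbitrary    */
const PetscScalar bval = 1.0;           /* Dirichlet boundary value, arbitrary */

DMDACreate1d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, Mx, 1, 1, PETSC_NULL, &da);
DMGetMatrix(da, MATAIJ, &A);            /* renamed DMCreateMatrix() after 3.2 */
DMCreateGlobalVector(da, &b);
DMDAGetCorners(da, &xs, PETSC_NULL, PETSC_NULL, &xm, PETSC_NULL, PETSC_NULL);
DMDAVecGetArray(da, b, &barr);

for (i = xs; i < xs + xm; i++) {
  row.i = i;
  if (i == 0 || i == Mx - 1) {          /* physical boundary: 1 * x_i = bval */
    v[0] = 1.0;  col[0].i = i;
    MatSetValuesStencil(A, 1, &row, 1, col, v, INSERT_VALUES);
    barr[i] = bval;
  } else {                              /* interior 3-point Laplacian row */
    v[0] = -1.0; col[0].i = i - 1;
    v[1] =  2.0; col[1].i = i;
    v[2] = -1.0; col[2].i = i + 1;
    MatSetValuesStencil(A, 1, &row, 3, col, v, INSERT_VALUES);
    barr[i] = 0.0;                      /* interior right-hand side, 0 for the sketch */
  }
}
DMDAVecRestoreArray(da, b, &barr);
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

Neumann conditions can be handled the same way, by putting a one-sided difference in the boundary row instead of the identity.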
   Matt

Barry

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener