[petsc-users] newbie question on the parallel allocation of matrices

Treue, Frederik frtr at risoe.dtu.dk
Fri Dec 2 09:52:02 CST 2011



From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Matthew Knepley
Sent: Friday, December 02, 2011 4:32 PM
To: PETSc users list
Subject: Re: [petsc-users] newbie question on the parallel allocation of matrices

On Fri, Dec 2, 2011 at 9:25 AM, Treue, Frederik <frtr at risoe.dtu.dk> wrote:


From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Matthew Knepley
Sent: Friday, December 02, 2011 4:01 PM
To: PETSc users list
Subject: Re: [petsc-users] newbie question on the parallel allocation of matrices

On Fri, Dec 2, 2011 at 8:58 AM, Treue, Frederik <frtr at risoe.dtu.dk> wrote:


From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Jed Brown
Sent: Friday, December 02, 2011 1:32 PM
To: PETSc users list
Subject: Re: [petsc-users] newbie question on the parallel allocation of matrices

On Fri, Dec 2, 2011 at 03:32, Treue, Frederik <frtr at risoe.dtu.dk> wrote:
OK, but that example seems to assume that you wish to connect only one matrix (the Jacobian) to a DA, whereas I wish to specify many. I think I found this done in KSP ex39; is that example doing anything deprecated, or will it work for me, e.g. with the various basic Mat routines (MatMult, MatAXPY, etc.) in a multiprocessor setup?

What do you mean by wanting many matrices? How do you want to use them? There is DMCreateMatrix() (misnamed DMGetMatrix() in petsc-3.2), which you can use as many times as you want.
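For instance, a minimal sketch (using the petsc-3.2 name DMGetMatrix, and assuming a DMDA called da already exists) would be:

  Mat            A, B;
  PetscErrorCode ierr;
  /* each call returns a new, correctly preallocated parallel matrix whose
     row layout matches the DMDA decomposition */
  ierr = DMGetMatrix(da, MATAIJ, &A);CHKERRQ(ierr);
  ierr = DMGetMatrix(da, MATAIJ, &B);CHKERRQ(ierr);
  /* assemble A and B with MatSetValuesStencil(), then use MatMult(),
     MatAXPY(), etc. as usual */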

And this was the one I needed. However, I have another question: what does DMDA_BOUNDARY_GHOSTED do, compared to DMDA_BOUNDARY_PERIODIC? From experience I now know that the PERIODIC option automagically does the right thing when I'm defining matrices, so I can simply specify the same stencil at all points. Does DMDA_BOUNDARY_GHOSTED do something similar? And if so, how is it controlled, i.e. how do I specify whether I've got Neumann or Dirichlet conditions, what order of extrapolation I want, and so forth? And if not, does it then ONLY make a difference if I'm working with more than one processor, i.e. if everything is sequential, are DMDA_BOUNDARY_GHOSTED and DMDA_BOUNDARY_NONE equivalent?

GHOSTED adds extra space at the boundary so you can always use the same stencil, but you decide what goes in there.
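For example (a sketch only, petsc-3.2 names, with nx a placeholder for the number of grid points):

  DM da;
  /* one dof, stencil width 1; GHOSTED reserves one extra, user-filled
     slot beyond each physical boundary in local vectors */
  ierr = DMDACreate1d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED,
                      nx, 1, 1, PETSC_NULL, &da);CHKERRQ(ierr);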

Does this apply to both matrices and vectors, i.e. will the ghost points be considered part of my computational domain or not?

The ghost nodes only exist in local vectors, not the global vectors for the solver.
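For example (a sketch, again assuming the DMDA da from above):

  Vec xglobal, xlocal;
  ierr = DMCreateGlobalVector(da, &xglobal);CHKERRQ(ierr);
  ierr = DMGetLocalVector(da, &xlocal);CHKERRQ(ierr);
  /* the global vector holds only the solver unknowns; the local vector
     also holds the ghost slots, filled here from neighbouring processes
     (ghosts at GHOSTED physical boundaries are left for you to set) */
  ierr = DMGlobalToLocalBegin(da, xglobal, INSERT_VALUES, xlocal);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(da, xglobal, INSERT_VALUES, xlocal);CHKERRQ(ierr);
  ierr = DMRestoreLocalVector(da, &xlocal);CHKERRQ(ierr);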

OK? So how does one implement boundary conditions? Normally I would include (say) one extra point over the edge of the domain (the ghost point) and then, if I start out with Ax=b (A and b known, x desired) and Dirichlet boundary conditions, implement the equation
x_G + x_1 = 2*b_B, where x_G is the unknown at the ghost point, x_1 is the unknown at the first "real" point, and b_B is my Dirichlet boundary condition.
Thus I need a special stencil in the first and last rows (i.e. the -1 and nx rows, with nx internal points) of my matrix, but this leads to memory errors. Is this possible while using GHOSTED? As I understand it, GHOSTED also deals with the MPI communication, so I'd like to retain it instead of working with NONE.
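To make the relation concrete, one way I could imagine realizing x_G + x_1 = 2*b_B with GHOSTED, without adding an extra matrix row, is to fill the ghost slot of the local vector by hand (a sketch only; bB is a placeholder for my boundary value, and da/xlocal are the 1d GHOSTED DMDA and local vector from above):

  PetscScalar *x;
  PetscInt    gxs;
  ierr = DMDAVecGetArray(da, xlocal, &x);CHKERRQ(ierr);
  ierr = DMDAGetGhostCorners(da, &gxs, PETSC_NULL, PETSC_NULL,
                             PETSC_NULL, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr);
  /* on the process owning the left physical boundary gxs == -1, so x[-1]
     is the ghost slot x_G; the Dirichlet relation x_G + x_1 = 2*b_B gives
     its value in terms of the first interior point x[0] */
  if (gxs == -1) x[-1] = 2.0*bB - x[0];
  ierr = DMDAVecRestoreArray(da, xlocal, &x);CHKERRQ(ierr);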

