[petsc-users] newbie question on the parallel allocation of matrices
Treue, Frederik
frtr at risoe.dtu.dk
Fri Dec 2 08:58:58 CST 2011
From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Jed Brown
Sent: Friday, December 02, 2011 1:32 PM
To: PETSc users list
Subject: Re: [petsc-users] newbie question on the parallel allocation of matrices
On Fri, Dec 2, 2011 at 03:32, Treue, Frederik <frtr at risoe.dtu.dk> wrote:
OK, but that example seems to assume that you wish to connect only one matrix (the Jacobian) to a DA – I wish to specify many, and I think I found this done in KSP ex39. Is that example doing anything deprecated, or will it work for me, e.g. with the various basic Mat routines (MatMult, MatAXPY, etc.) in a multiprocessor setup?
What do you mean by wanting many matrices? How do you want to use them? There is DMCreateMatrix() (misnamed DMGetMatrix() in petsc-3.2), which you can use as many times as you want.
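For reference, a minimal sketch of that pattern using the petsc-3.2 spellings (where the call is still DMGetMatrix); the grid size and stencil are arbitrary placeholders, and error checking is omitted for brevity:

/* Each DMGetMatrix() call returns a fresh, correctly preallocated
 * parallel matrix, so the usual Mat routines work across processors. */
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM  da;
  Mat A, B;
  Vec x, y;

  PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);
  DMDACreate2d(PETSC_COMM_WORLD,
               DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,
               DMDA_STENCIL_STAR,
               32, 32, PETSC_DECIDE, PETSC_DECIDE,
               1, 1, PETSC_NULL, PETSC_NULL, &da);

  DMGetMatrix(da, MATAIJ, &A);   /* first operator  */
  DMGetMatrix(da, MATAIJ, &B);   /* second operator */
  /* set entries with MatSetValuesStencil(), then assemble both */

  DMCreateGlobalVector(da, &x);
  VecDuplicate(x, &y);
  MatMult(A, x, y);                          /* y = A*x, in parallel   */
  MatAXPY(A, 2.0, B, SAME_NONZERO_PATTERN);  /* A += 2*B; both matrices
                                                share the DA's layout  */
  VecDestroy(&x); VecDestroy(&y);
  MatDestroy(&A); MatDestroy(&B);
  DMDestroy(&da);
  PetscFinalize();
  return 0;
}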
And this was the one I needed. However, I have another question: what does DMDA_BOUNDARY_GHOSTED do, compared to DMDA_BOUNDARY_PERIODIC? From experience I now know that the PERIODIC option automagically does the right thing when I'm defining matrices, so I can simply specify the same stencil at all points. Does DMDA_BOUNDARY_GHOSTED do something similar? And if so, how is it controlled, i.e. how do I specify whether I have Neumann or Dirichlet conditions, what order of extrapolation I want, and so forth? And if not, does it then ONLY make a difference if I'm working with more than one processor, i.e. if everything is sequential, are DMDA_BOUNDARY_GHOSTED and DMDA_BOUNDARY_NONE equivalent?
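For context, a minimal sketch of the GHOSTED case; my understanding (not confirmed in this thread) is that the local form simply gains an extra ghost layer at the physical boundary which the application fills itself, which is where a Dirichlet or Neumann choice would be encoded. The grid size is a placeholder:

DM          da;
Vec         xloc;
PetscScalar **x;
PetscInt    j, xs, ys, xm, ym;

DMDACreate2d(PETSC_COMM_WORLD,
             DMDA_BOUNDARY_GHOSTED, DMDA_BOUNDARY_GHOSTED,
             DMDA_STENCIL_STAR,
             32, 32, PETSC_DECIDE, PETSC_DECIDE,
             1, 1, PETSC_NULL, PETSC_NULL, &da);

DMGetLocalVector(da, &xloc);
DMDAVecGetArray(da, xloc, &x);
DMDAGetCorners(da, &xs, &ys, PETSC_NULL, &xm, &ym, PETSC_NULL);
if (xs == 0) {                       /* this rank owns the i = 0 edge   */
  for (j = ys; j < ys + ym; j++)
    x[j][-1] = 0.0;                  /* boundary ghost cell: PETSc does
                                        not fill it, the application
                                        does (here, homogeneous
                                        Dirichlet as an example)        */
}
DMDAVecRestoreArray(da, xloc, &x);
DMRestoreLocalVector(da, &xloc);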