common nodal points on neighbouring subdomains
Barry Smith
bsmith at mcs.anl.gov
Wed May 17 07:53:00 CDT 2006
Thomas,
I think you might misunderstand the function of XXXSetValuesLocal().
With these routines the local index is mapped to the global index and
the value is then put into the "correct" global location, so some values
will be shipped off to the "owning" process.
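(For reference, the mapping those routines use is whatever you attached
with Mat/VecSetLocalToGlobalMapping(); a rough, untested sketch is below,
where nlocal and the l2g[] array stand for whatever your code already
builds, and the exact argument lists differ a bit between PETSc versions.)

  #include <petscmat.h>
  #include <petscvec.h>

  /* Rough sketch (untested): attach a local-to-global map so that
     MatSetValuesLocal()/VecSetValuesLocal() can translate the local
     (per-subdomain) numbering.  nlocal and l2g[] are placeholders for
     whatever your code already has. */
  PetscErrorCode AttachLocalNumbering(Mat A, Vec b, PetscInt nlocal,
                                      const PetscInt l2g[])
  {
    ISLocalToGlobalMapping ltog;
    PetscErrorCode         ierr;

    ierr = ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, nlocal, l2g,
                                        PETSC_COPY_VALUES, &ltog);CHKERRQ(ierr);
    ierr = MatSetLocalToGlobalMapping(A, ltog, ltog);CHKERRQ(ierr);
    ierr = VecSetLocalToGlobalMapping(b, ltog);CHKERRQ(ierr);
    ierr = ISLocalToGlobalMappingDestroy(&ltog);CHKERRQ(ierr);
    return 0;
  }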
1) If two processes that
"share" a vertex both generate the entire stiffness for that vertex
(that is, each process loops over all the elements containing the
vertex, summing the stiffness for that vertex), then the result will
be exactly twice the expected value (since the non-owning process ships
its full result to the "owning" process, where it is added to the full
stiffness that process already generated). I THINK this is what you are doing.
2) If, on the other hand, each process generates only the part of the stiffness
for the vertex from elements that it "owns", then the global matrix will be
correct, BUT the communication of matrix elements to the "owning" process
still takes place. If you DO NOT want the communication to the owning process
and instead want to keep part of the stiffness on each process (for a shared
vertex), that is what I would call the unassembled form. This is done by some
people but is not really supported by PETSc.
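The usual pattern for 2) is that each process adds only the element matrices
of the elements it owns, with ADD_VALUES, and PETSc sums the pieces for a
shared vertex during assembly. A rough, untested sketch (the Element struct
is just a placeholder for your own element data):

  #include <petscmat.h>

  /* Sketch of case 2): each process adds only the element stiffness of
     elements it owns; contributions to a shared vertex from different
     processes are summed by PETSc during MatAssemblyBegin/End. */
  typedef struct {
    PetscInt     nv;     /* vertices per element                 */
    PetscInt    *lvert;  /* local vertex numbers of this element */
    PetscScalar *Ke;     /* nv*nv element stiffness, row major   */
  } Element;

  PetscErrorCode AssembleOwnedElements(Mat A, PetscInt nelem, Element elem[])
  {
    PetscErrorCode ierr;
    PetscInt       e;

    for (e = 0; e < nelem; e++) {
      ierr = MatSetValuesLocal(A, elem[e].nv, elem[e].lvert,
                               elem[e].nv, elem[e].lvert,
                               elem[e].Ke, ADD_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    return 0;
  }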
I think you may want to consider NOT having "shared" vertices (that is,
dropping whichever of 1) or 2) you are currently doing). This may not
require much change to the parts of the code you do not want to change,
but it will allow you to easily use all of the PETSc solvers. The problem
with "unassembled" approaches is that they save a bit of communication in
generating the stiffness, but then (since the matrix is never assembled)
you cannot form much of anything in terms of preconditioners, so you end
up doing many linear solver iterations, each with its own communication.
Communicating the stiffness initially (if you have a good partitioning of
elements) is really not much communication and IMHO is the wrong place to
optimize.
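If, after assembling in the standard way, what you are missing is only the
solution values at the shared points on every process after the solve, one
possibility is a ghosted vector. A rough, untested sketch (the ghost index
list is just whatever your code already knows about the shared points, in
global numbering):

  #include <petscvec.h>

  /* Sketch: create the solution vector with ghost entries for the shared
     nodal points; nlocal/Nglobal/nghost/ghosts[] are placeholders. */
  PetscErrorCode CreateGhostedSolution(MPI_Comm comm, PetscInt nlocal,
                                       PetscInt Nglobal, PetscInt nghost,
                                       const PetscInt ghosts[], Vec *x)
  {
    PetscErrorCode ierr;

    ierr = VecCreateGhost(comm, nlocal, Nglobal, nghost, ghosts, x);CHKERRQ(ierr);
    return 0;
  }

  /* After KSPSolve(ksp,b,x): pull the owning process's values into the
     local ghost slots so every subdomain sees the shared values. */
  PetscErrorCode UpdateSharedValues(Vec x)
  {
    PetscErrorCode ierr;
    Vec            xlocal;

    ierr = VecGhostUpdateBegin(x, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = VecGhostUpdateEnd(x, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
    /* xlocal is a sequential vector with the owned entries first and the
       ghost (shared) entries appended at the end. */
    ierr = VecGhostGetLocalForm(x, &xlocal);CHKERRQ(ierr);
    /* ... read values from xlocal here ... */
    ierr = VecGhostRestoreLocalForm(x, &xlocal);CHKERRQ(ierr);
    return 0;
  }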
I hope my response is not too confusing; I may have misunderstood
parts of your question.
Barry
On Wed, 17 May 2006, Thomas Geenen wrote:
> Dear Petsc users,
>
> I am solving a system of equations Ax=b resulting from a finite element
> package. The parallelization strategy for this program is to minimize
> communication; therefore each subdomain has its own copy of the common nodal
> points it shares with neighbouring subdomains (local numbering). This strategy
> is already implemented and I don't want to change it.
>
> I fill the matrix and right-hand side with calls to MatSetValuesLocal and
> VecSetValuesLocal. Some entries are filled on different subdomains (of course
> with the same value).
>
> The solution for the first subdomain is correct; however, the solution vector
> on the second subdomain does not contain the solution on the common nodal
> points it shares with subdomain 1.
> This is probably a feature, but it is not consistent with my program setup.
> The question is:
> how can I tell PETSc to return a solution for all the positions I filled with
> VecSetValuesLocal?
>
> Thanks in advance
> Thomas Geenen
> PS: I got the impression that the assembly routines also exchange common nodal
> point info. It would be nice if this could be skipped as well.
> Of course some extra communication could solve my problem, but I would prefer a
> more elegant solution if one exists.
>
>