common nodal points on neighbouring subdomains

Thomas Geenen geenen at gmail.com
Wed May 17 14:46:25 CDT 2006


On Wednesday 17 May 2006 20:43, Barry Smith wrote:
>    Thomas,
>
>      Ok, things are clearer. Are you using the VecCreateGhost() routines?
> If you are not using them you should use them (see manual page
> http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Vec/VecCreateGhost.html).
> Once the solver is done you will call
>         VecGhostUpdateBegin(v,INSERT_VALUES,SCATTER_FORWARD);
>         VecGhostUpdateEnd(v,INSERT_VALUES,SCATTER_FORWARD);
> and use VecGhostGetLocalForm() to access the vector plus its
> ghost points. (Note that the "local" and "global" representations
> share the same data in the same memory; the difference is that the "local"
> form is a VecSeq (with the ghost points at the end) and the "global" form is
> a VecMPI, which is the one actually passed to the solver and used with
> VecSetValues().)
>
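A minimal sketch of the ghost-vector workflow described above, written with
today's PETSc calling sequences (which differ slightly from the 2006 ones
quoted here); nlocal, nghost, ghosts[] and the ksp/b objects are placeholders
for the application's own data, not code from this thread:

    Vec         x, xlocal;
    PetscScalar *vals;
    PetscInt    i;

    /* solution vector with room for copies of off-process nodal points */
    VecCreateGhost(PETSC_COMM_WORLD, nlocal, PETSC_DECIDE, nghost, ghosts, &x);

    /* to the solver this is an ordinary parallel (MPI) vector */
    KSPSolve(ksp, b, x);

    /* copy the owners' values into the ghost slots */
    VecGhostUpdateBegin(x, INSERT_VALUES, SCATTER_FORWARD);
    VecGhostUpdateEnd(x, INSERT_VALUES, SCATTER_FORWARD);

    /* sequential view of the same data: owned entries first, then the ghosts */
    VecGhostGetLocalForm(x, &xlocal);
    VecGetArray(xlocal, &vals);
    for (i = 0; i < nlocal + nghost; i++) {
      /* vals[i] is the solution at every nodal point this process holds */
    }
    VecRestoreArray(xlocal, &vals);
    VecGhostRestoreLocalForm(x, &xlocal);
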
that sounds exactly like what I need!!
I am starting to like PETSc more and more :)

thanks for the help
Thomas

>    Good luck,
>
>     Barry
>
> On Wed, 17 May 2006, Thomas Geenen wrote:
> > On Wednesday 17 May 2006 14:53, Barry Smith wrote:
> >>    Thomas,
> >>
> >>      I think you might misunderstand the function of
> >> XXXSetValuesLocal(). What happens with them is that the local index is
> >> mapped to the global index and the value is then put into the "correct"
> >> global location, so some values will be shipped off to the "owning" process.
> >
> > this happens in the assembly phase, right?
> >
> >> 1) If two processes that
> >> "share" a vertex both generate the entire stiffness for that vertex
> >> (that is, each process loops over all the elements containing the
> >> vertex, summing the stiffness for that vertex) then the result will
> >> be exactly twice the expected value (since the non-owning process sends
> >> its result to the "owning" process, where it is added). I THINK this is
> >> what you are doing.
> >
> > I intercept the matrix and rhs just before the "old" call to the solver
> > routine. Just before that call, the rhs values for common nodal points
> > are calculated globally and broadcast to the owning processes. In this
> > setup more than one process can own a nodal point. I construct a
> > local2global mapping and apply it to the matrix and vector:
> >
> > ISLocalToGlobalMapping ltog;
> > ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, nc, local2global, &ltog);
> > MatSetLocalToGlobalMapping(A, ltog);
> >
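For context, a sketch of how such a mapping is typically combined with
MatSetValuesLocal()/VecSetValuesLocal() for element-by-element assembly, using
today's PETSc calling sequences (ISLocalToGlobalMappingCreate() now also takes
a block size and a copy mode, unlike the 2006 call quoted above). The element
loop, the size 8, and the rhs/idx[]/ke[]/fe[]/nelements_owned names are
placeholders; each process adds only contributions from elements it owns,
i.e. Barry's case 2 below:

    ISLocalToGlobalMapping ltog;
    PetscInt               e, idx[8];        /* local node numbers of one element */
    PetscScalar            ke[8*8], fe[8];   /* element stiffness and load */

    ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, nc, local2global,
                                 PETSC_COPY_VALUES, &ltog);
    MatSetLocalToGlobalMapping(A, ltog, ltog);
    VecSetLocalToGlobalMapping(rhs, ltog);

    for (e = 0; e < nelements_owned; e++) {
      /* ... gather idx[], ke[], fe[] for element e ... */
      MatSetValuesLocal(A, 8, idx, 8, idx, ke, ADD_VALUES);
      VecSetValuesLocal(rhs, 8, idx, fe, ADD_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
    VecAssemblyBegin(rhs);
    VecAssemblyEnd(rhs);
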
> >> 2) If, on the other hand, each process generates only the part of the
> >> stiffness for the vertex from elements that it "owns", then the global
> >> matrix will be correct, BUT communication of matrix elements to the
> >> "owning" process still takes place. If you DO NOT want the communication
> >> to the owning process but want to keep part of the stiffness on each
> >> process (for a shared vertex), I would call this the unassembled form.
> >> This is done by some people but is not really supported by PETSc.
> >>
> >> I think you may want to consider NOT having "shared" vertices (that is,
> >> use either approach 1, if that is what you are doing, or approach 2).
> >> This may not require much change to the code you do not want to change,
> >> but it will allow you to easily use all of the PETSc solvers. The
> >> problem with "unassembled" approaches is that
> >> they save a bit of communication in generating the stiffness but then
> >> (since the matrix is never assembled) cannot form much of anything in
> >> terms of preconditioners, thus you do lots of linear solver iterations
> >> with communication. Communicating the stiffness initially (if you have a
> >> good partitioning of elements) is really not much communication and IMHO
> >> is the wrong place to optimize.
> >
> > Basically what happens in my code is that each process holds all
> > nodal points of the elements it owns. I tell PETSc the global numbering of
> > the matrix and vector entries so I can use all the preconditioners and
> > solvers etc. However, in the original setup the program expects, on return,
> > to have the solution vector for all nodes of the elements it owns. PETSc
> > gives the solution only to the "owner process" of the nodal point. To correct
> > this I would have to send parts of the solution to processes that don't
> > "own" that part of the solution but do expect it to be there. I thought
> > it would be elegant if PETSc could return a solution vector that
> > corresponds to the local filling of the vector, not to the "owners" of the
> > nodal point.
> >
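One way to do that extra communication by hand, if the ghosted-vector route
above is not used, is a VecScatter from the parallel solution onto a
sequential work vector. A sketch with today's PETSc calling sequences, where
x is the parallel solution returned by the solver and nneeded/needed[] (the
global indices of every nodal point this process filled) are assumed to come
from the application:

    IS         is_needed;
    Vec        xseq;
    VecScatter ctx;

    /* list the global indices this process wants, owned or not */
    ISCreateGeneral(PETSC_COMM_WORLD, nneeded, needed, PETSC_COPY_VALUES,
                    &is_needed);
    VecCreateSeq(PETSC_COMM_SELF, nneeded, &xseq);

    /* pull those entries of x into xseq, in the order of needed[] */
    VecScatterCreate(x, is_needed, xseq, NULL, &ctx);
    VecScatterBegin(ctx, x, xseq, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterEnd(ctx, x, xseq, INSERT_VALUES, SCATTER_FORWARD);

    /* xseq now holds the solution at every nodal point this process expects,
       regardless of which process owns it */
    VecScatterDestroy(&ctx);
    ISDestroy(&is_needed);
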
> > thanks for your fast response
> >
> > Thomas Geenen
> >
> >>    I hope my response is not too confusing, I may have misunderstood
> >> parts of your question.
> >>
> >>     Barry
> >>
> >> On Wed, 17 May 2006, Thomas Geenen wrote:
> >>> Dear Petsc users,
> >>>
> >>> I am solving a system of equations Ax=b resulting from a finite element
> >>> package. The parallelization strategy for this program is to minimize
> >>> communication; therefore each subdomain has its own copy of the common
> >>> nodal points it shares with neighbouring subdomains (local numbering).
> >>> This strategy is already implemented and I don't want to change it.
> >>>
> >>> I fill the matrix and right-hand side with calls to MatSetValuesLocal
> >>> and VecSetValuesLocal. Some entries are filled on different subdomains
> >>> (of course with the same value).
> >>>
> >>> The solution for the first subdomain is correct. However, the solution
> >>> vector on the second subdomain does not contain the solution at the
> >>> common nodal points it shares with subdomain 1.
> >>> This is probably a feature, but it is not consistent with my program
> >>> setup. The question is:
> >>> how can I tell PETSc to return a solution for all the positions I
> >>> filled with VecSetValuesLocal?
> >>>
> >>> Thanks in advance
> >>> Thomas Geenen
> >>> PS: I got the impression that the assembly routines also exchange common
> >>> nodal point info. It would be nice if this could be skipped as well.
> >>> Of course some extra communication could solve my problem, but I would
> >>> prefer a more elegant solution if one exists.
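
Since the duplicated entries mentioned above carry identical values on every
subdomain, one way to keep that assembly exchange harmless is to insert rather
than add them, so duplicates simply overwrite each other with the same number.
A sketch under that assumption, using today's PETSc calling sequences; ltog,
nc and local2global are the objects from the snippet earlier in the thread,
while nowned and v[] are placeholders for the application's data:

    Vec      rhs;
    PetscInt i;

    /* parallel right-hand side; nowned = nodal points this process owns globally */
    VecCreateMPI(PETSC_COMM_WORLD, nowned, PETSC_DECIDE, &rhs);
    VecSetLocalToGlobalMapping(rhs, ltog);

    /* every process inserts values for all nc of its local nodal points,
       including its copies of shared ones; identical duplicates are harmless
       with INSERT_VALUES */
    for (i = 0; i < nc; i++) {
      VecSetValuesLocal(rhs, 1, &i, &v[i], INSERT_VALUES);
    }
    VecAssemblyBegin(rhs);
    VecAssemblyEnd(rhs);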



