Creating a new vector based on an already distributed set of Vecs

Vijay S. Mahadevan vijay.m at gmail.com
Sun Mar 23 18:21:15 CDT 2008


Barry,

First, thanks for the detailed answer. That cleared up a few of my questions
already.

> Based on your question I am guessing you have the first case, that is,
> you already work with global representations of the individual physics.
> If this is correct, then a global representation of the multiphysics is
> conceptually simply a concatenation of the global representations of
> the individual physics

Yes, this is my situation currently, and I am now making the transition from
single-physics (case 1: global individual-physics and ghosted
individual-physics vectors) to multi-physics (case 2: global unified-physics,
global individual-physics, and ghosted individual-physics vectors).

> I highly
> recommend starting with the three distinct levels of representation
> and doing the needed copies;
> your code will be clean and MUCH easier to debug and keep track of.

Fair enough. Thanks for analyzing the possibilities here.

> Once you are running your
> cool multiphysics app on your real problem, if profiling indicates the
> extra copies are killing you,
> you can come back to us and we can help with optimizations.

I will keep that in mind Barry :-)

Again, Matt and Barry, thanks for the help!

> -----Original Message-----
> From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov]
> On Behalf Of Barry Smith
> Sent: Sunday, March 23, 2008 5:03 PM
> To: petsc-users at mcs.anl.gov
> Subject: Re: Creating a new vector based on an already distributed set of
> Vecs
> 
> 
> On Mar 23, 2008, at 2:59 PM, Vijay S. Mahadevan wrote:
> > Hi all,
> >
> > I am sure the subject line wasn’t too explanatory and so here goes.
> >
> > I have a system which already has a Vec object created and
> > distributed on
> > many processors based on domain decomposition. I could have several
> > such
> > systems, each with a solution vector and a residual vector
> > pertaining to its
> > physics.
> 
>     Is this really an accurate description of the situation? In
> parallel computing (and certainly in the PETSc paradigm), for a single
> physics there are really TWO distinct vector layouts: the layout that
> the Newton and linear solvers see (without holes or ghost points in
> the vectors), which we call the global vectors, and what the
> "physics" on each process sees, which is its part of the global vector
> plus the ghost points needed to perform the local part of the
> computation, which we call the local (or ghosted) vectors. At the
> beginning of the function evaluations (or a matrix-free multiply),
> values are moved from the global representation to the local
> representation, the local "physics" is computed, and then (depending
> on the discretization: for finite differences usually no, for finite
> elements usually yes) the computed values are moved back from the
> ghost locations to the processes where they belong. In PETSc the
> movement of the ghost values is handled
> with VecScatter or DAGlobalToLocal/DALocalToGlobal.
> 
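
If I am reading this right, in code the ghost-point movement you describe
would look roughly like the following (only a minimal sketch: error checking
is omitted, I am using VecCreateGhost rather than a DA just to keep it short,
and nlocal, nghost and the ghost index array "ghosts" are placeholders for
whatever my decomposition provides):

  Vec u, ulocal;
  VecCreateGhost(PETSC_COMM_WORLD, nlocal, PETSC_DECIDE, nghost, ghosts, &u);

  /* global -> local: bring in the ghost values before the local physics */
  VecGhostUpdateBegin(u, INSERT_VALUES, SCATTER_FORWARD);
  VecGhostUpdateEnd(u, INSERT_VALUES, SCATTER_FORWARD);
  VecGhostGetLocalForm(u, &ulocal);
  /* ... compute the local physics on ulocal (owned entries followed by
     the ghost entries) ... */
  VecGhostRestoreLocalForm(u, &ulocal);

  /* local -> global: for finite elements, add the contributions that were
     accumulated in ghost locations back to their owning processes */
  VecGhostUpdateBegin(u, ADD_VALUES, SCATTER_REVERSE);
  VecGhostUpdateEnd(u, ADD_VALUES, SCATTER_REVERSE);
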
>    With multiple "physics" (say two) there are possibly three levels:
> the global representation of the unified physics, the global
> representation of the individual physics, and the local representation
> of the individual physics (note that "individual" physics local
> computations actually depend, in general, on the other individual
> physics, either as given parameters or as boundary values). If you are
> doing linear or nonlinear solves on the individual physics then you
> really need all three representations; if you only have an outer
> Newton (or Krylov) solver and compute the physics by performing a
> bunch of individual physics solves, then you really don't need the
> global representation of the individual physics, and you could
> scatter directly from the global representation of the unified physics
> to the local (ghosted)
> representation of the individual physics.
> 
> Based on your question I am guessing you have the first case, that is,
> you already work with global representations of the individual physics.
> If this is correct, then a global representation of the multiphysics is
> conceptually simply a concatenation of the global representations of
> the individual physics. To be more concrete, say I have a global
> representation of physics one, which is an MPI vector u where
> u(part-zero) lives on process 0 while u(part-one) lives on process 1.
> Meanwhile physics two has T(part-zero) on process 0 and T(part-one) on
> process 1. (Since these are global representations of u and T, there
> are no ghost points inside u and T.) Now the global representation of
> the unified physics would be P(part-zero) = [u(part-zero) T(part-zero)]
> on process zero and P(part-one) = [u(part-one) T(part-one)] on process
> one.
> 
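
Just to make sure I follow, that concatenation (and the scatters used for
the copies you mention below) could be set up along these lines; this is
only a sketch with error checking omitted, and the argument orders are
those of the PETSc release I am building against:

  PetscInt   nu, nT, rstartU, rendU, rstartP, rendP;
  Vec        P;
  IS         isUinP, isUfromU;
  VecScatter PtoU;

  VecGetLocalSize(u, &nu);
  VecGetLocalSize(T, &nT);
  /* unified vector: each process owns its piece of u followed by its
     piece of T */
  VecCreateMPI(PETSC_COMM_WORLD, nu + nT, PETSC_DETERMINE, &P);
  VecGetOwnershipRange(P, &rstartP, &rendP);
  VecGetOwnershipRange(u, &rstartU, &rendU);

  /* the first nu owned entries of P correspond to the owned entries of u */
  ISCreateStride(PETSC_COMM_WORLD, nu, rstartP, 1, &isUinP);
  ISCreateStride(PETSC_COMM_WORLD, nu, rstartU, 1, &isUfromU);
  VecScatterCreate(P, isUinP, u, isUfromU, &PtoU);
  /* ... the scatter for T follows the same pattern, with the P indices
     offset by nu ... */
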
> Your question is then: if you have the vectors u(part-*) and T(part-*),
> can you avoid copying them into P(part-*) (the global SNES solver works
> on the global unified vector) and instead wrap up the two vectors u and
> T to make them look like a single vector for SNES?
> 
> The answer is yes you can, BUT (as Matt points out) doing this is ONLY
> a tiny optimization (in time and memory savings) over simply copying
> the two parts into P when needed and copying them out of P when
> necessary. Over the entire run of your two physics I would be amazed
> if these little copies (which involve no parallel communication and
> will be totally swamped by the parallel communication used in updating
> the ghost points) made up more than a tiny percentage of the time (if
> they do, then the two "physics" are each trivial). I highly
> recommend starting with the three distinct levels of representation
> and doing the needed copies;
> your code will be clean and MUCH easier to debug and keep track of.
> Once you are running your
> cool multiphysics app on your real problem, if profiling indicates the
> extra copies are killing you,
> you can come back to us and we can help with optimizations.
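
To make sure I have the recommended structure straight, my residual
evaluation for the outer SNES would then look roughly like this. This is a
sketch under my own assumptions: the scatters PtoU and PtoT are the ones
built as in my earlier snippet, EvaluatePhysicsOne/Two are hypothetical
stand-ins for my existing single-physics residual routines (which do their
own ghost updates), error checking is omitted, and the calling sequences
follow the PETSc release I am using:

  #include "petscsnes.h"

  typedef struct {
    Vec        u, T;          /* global individual-physics vectors    */
    Vec        ru, rT;        /* their residuals                      */
    VecScatter PtoU, PtoT;    /* scatters between P and the two parts */
  } MultiCtx;

  PetscErrorCode FormFunction(SNES snes, Vec P, Vec F, void *ptr)
  {
    MultiCtx *ctx = (MultiCtx*)ptr;

    /* copy the two pieces of P into the individual global vectors */
    VecScatterBegin(ctx->PtoU, P, ctx->u, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterEnd(ctx->PtoU, P, ctx->u, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterBegin(ctx->PtoT, P, ctx->T, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterEnd(ctx->PtoT, P, ctx->T, INSERT_VALUES, SCATTER_FORWARD);

    /* each physics evaluates its own residual on its own representation */
    EvaluatePhysicsOne(ctx->u, ctx->T, ctx->ru);
    EvaluatePhysicsTwo(ctx->u, ctx->T, ctx->rT);

    /* copy the individual residuals back into the unified residual F
       (with SCATTER_REVERSE the roles of the two vector arguments are
       reversed relative to the forward scatter) */
    VecScatterBegin(ctx->PtoU, ctx->ru, F, INSERT_VALUES, SCATTER_REVERSE);
    VecScatterEnd(ctx->PtoU, ctx->ru, F, INSERT_VALUES, SCATTER_REVERSE);
    VecScatterBegin(ctx->PtoT, ctx->rT, F, INSERT_VALUES, SCATTER_REVERSE);
    VecScatterEnd(ctx->PtoT, ctx->rT, F, INSERT_VALUES, SCATTER_REVERSE);
    return 0;
  }
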
> 
> 
>     Barry
> 
> 
> We ourselves are struggling with approaches to make doing multiphysics
> very straightforward with PETSc;
> any feedback from users is helpful.
> 
> 
> 
> >
> >
> > Now, can I create a global vector which would just hold pointers to
> > the already existing vectors but which, from PETSc's point of view,
> > appears to be a globally valid (and possibly weirdly distributed)
> > vector? Such an option would save the memory needed for the global
> > vector and eliminate errors in keeping the solution and residual
> > vectors of the systems synchronized.
> >
> > I was reading the documentation and found the PetscMap data
> > structure, but I am not entirely sure if this is what I am looking
> > for.
> >
> > I hope that makes sense. I would appreciate any help you can provide
> > to
> > point me in the right direction.
> >
> > Cheers,
> > Vijay
> > 
