VecView behaviour

Jed Brown jed at 59A2.org
Wed Jun 10 09:38:19 CDT 2009


Andreas Grassl wrote:
> Hi Jed,
> 
> the BNN algorithm in the literature always distinguishes between inner nodes and
> interface nodes. The short question arising from your explanation is whether
> "owned DOFs" is a synonym for the inner DOFs and "ghosted DOFs" for the
> interface DOFs?

No, every degree of freedom (interior and interface) must be owned by
exactly one process.  You want every process to own its interior
degrees of freedom, but I don't think there is a way to guarantee this
without a procedure like the one I described.

> Below you find more extended thoughts and an example.
> 
> Jed Brown schrieb:
>> Andreas Grassl wrote:
>>> Barry Smith schrieb:
>>>>    Hmm, it sounds like the difference between local "ghosted" vectors
>>>> and the global parallel vectors. But I do not understand why any of the
>>>> local vector entries would be zero.
>>>> Doesn't the vector X that is passed into KSP (or SNES) have the global
>>>> entries and uniquely define the solution? Why is viewing that not right?
>>>>
>>> I still don't fully understand the underlying processes of the whole PCNN
>>> solution procedure, but while experimenting I substituted
>>>
>>> MatCreateIS(commw, ind_length, ind_length, PETSC_DECIDE, PETSC_DECIDE,
>>> gridmapping, &A);
>> This creates a matrix that is bigger than you want, and gives you the
>> dead values at the end (global dofs that are not in the range of the
>> LocalToGlobalMapping).
>>
>> This is from the note on MatCreateIS:
>>
>> | m and n are NOT related to the size of the map, they are the size of the part of the vector owned
>> | by that process. m + nghosts (or n + nghosts) is the length of map since map maps all local points 
>> | plus the ghost points to global indices.
>>
>>> by
>>>
>>> MatCreateIS(commw, PETSC_DECIDE, PETSC_DECIDE, actdof, actdof, gridmapping, &A);
>> This creates a matrix of the correct size, but it looks like it could
>> easily end up with the "wrong" dofs owned locally.  What you probably
>> want to do is:
>>
>> 1. Resolve ownership just like with any other DD method.  This
>> partitions your dofs into n owned dofs and ngh ghosted dofs on each
>> process.  The global sum of n is N, the size of the global vectors that
>> the solver will interact with.
> 
> do I understand correctly that the owned dofs are the inner nodes and the
> ghosted dofs are the interface dofs?

No, a dof is ghosted on processes that reference it but do not own it.
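
(To make "ghosted" concrete, here is a minimal two-process sketch using
VecCreateGhost; the sizes are hypothetical, 'rank' comes from
MPI_Comm_rank, and error checking is omitted.  Rank 1 references global
dof 2 as a ghost while rank 0 owns it.)

  #include "petscvec.h"

  PetscInt n      = !rank ? 3 : 2;   /* locally owned dofs             */
  PetscInt nghost = !rank ? 0 : 1;   /* dofs referenced but not owned  */
  PetscInt ghosts[1] = {2};          /* global index of rank 1's ghost */
  Vec      v;

  /* Global size 5: rank 0 owns global dofs 0..2, rank 1 owns 3..4, and
     rank 1 additionally sees global dof 2 as a ghost. */
  VecCreateGhost(PETSC_COMM_WORLD, n, PETSC_DECIDE, nghost,
                 !rank ? PETSC_NULL : ghosts, &v);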

>> 2. Make an ISLocalToGlobalMapping where all the owned dofs come first,
>> mapping (0..n-1) to (rstart..rstart+n-1), followed by the ghosted dofs
>> (local indices n..n+ngh-1) which map to remote processes.  (rstart is the
>> global index of the first owned dof.)
> 
> currently I set up my ISLocalToGlobalMapping by giving each process all of its
> dofs in arbitrary order, with the effect that the interface dofs appear multiple
> times. Attached is a small example with 2 subdomains and 270 DOFs.

I think you're ending up with a lot of interior dofs owned by remote
processes (this is bad).  I'll try to explain using the 24 dof example
below.

>> One way to do this is to use MPI_Scan to find rstart, then number all
>> the owned dofs and scatter the result.  The details will depend on
>> how you store your mesh.  (I'm assuming it's unstructured; this step is
>> trivial if you use a DA.)
> 
> Yes, the mesh is unstructured. I read the element-based partitioning out of the
> FE package, loop over all elements to find the DOFs belonging to each, and
> assemble the index vector for the ISLocalToGlobalMapping that way, without
> treating interface DOFs specially. I thought this would be handled automatically
> when setting up the mapping, since some global DOFs then appear multiple times.
> 
>> 3. Call MatCreateIS(comm,n,n,PETSC_DECIDE,PETSC_DECIDE,mapping,&A);
>>
> 
> Seeing this function call and interpreting the owned DOFs as the subdomain
> inner DOFs, doesn't the matrix A come out smaller than full size?!
> 
> Given a 4x6 grid with 1 DOF per node, divided into 4 subdomains, I get 9
> interface DOFs:
> 
> 0  o  o  O  o  5
>          |
> 6  o  o  O  o  o
>          |
> O--O--O--O--O--O
>          |
> o  o  o  O  o  23
> 
> My first approach to creating the matrix would give a matrix of size 35x35,
> with 11 dead entries at the end of the vector.
> 
> My second approach gives the "correct" matrix size of 24x24.
> 
> By splitting into n owned values plus some ghosted values, I would expect a
> matrix of size 15x15. Otherwise I don't see how I could partition the grid in
> a consistent way.
> 
> I would really appreciate it if you could show me how the partitioning and
> ownership of the DOFs work out in this little example.

I see 4 subdomains with interior dofs

rank 0: 0 1 2 6 7 8
rank 1: 4 5 10 11
rank 2: 18 19 20
rank 3: 22 23

I'll continue to use this "natural ordering" to describe the dofs, but
you don't normally want to use it because it is not compatible with the
decomposition you are actually using.  Suppose we resolve ownership by
assigning each dof to the lowest rank that touches it.  Then the global
vector (seen by the solver) is

rank 0: 0 1 2 3 6 7 8 9 12 13 14 15  (global indices 0:12)
rank 1: 4 5 10 11 16 17              (global indices 12:18)
rank 2: 18 19 20 21                  (global indices 18:22)
rank 3: 22 23                        (global indices 22:24)

Your local-to-global map should be with respect to the global indices in
the ordering compatible with the decomposition.  With respect to the
natural ordering, it is

rank 0: 0 1 2 3 6 7 8 9 12 13 14 15
rank 1: 3 4 5 9 10 11 15 16 17
rank 2: 12 13 14 15 18 19 20 21
rank 3: 15 16 17 21 22 23

Converting this to the ordering compatible with your decomposition, we
have

rank 0: 0 1 2 3 4 5 6 7 8 9 10 11
rank 1: 3 12 13 7 14 15 11 16 17
rank 2: 8 9 10 11 18 19 20 21
rank 3: 11 16 17 21 22 23

When you create the matrix, 'n' is the number of owned dofs on each
process [12,6,4,2] and you want to use this final form of the local to
global mapping.  If you just give the total size (24), the partition
will balance the number of owned dofs, but interior dofs won't end up
being owned on the correct process.  If you use the natural ordering,
it's hopeless to end up with correct interior ownership.  (Note that
assigning ownership to the highest touching rank would have been better
balanced in this case.)
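
As a rough (untested) sketch of steps 1-3 in C, with the PETSc calling
sequences of the current release: assume 'comm' is your communicator
and the owned/ghost counts 'n' and 'ngh' have already been determined
on each process (error checking omitted).

  #include "petscmat.h"

  PetscInt               n, ngh;   /* from step 1's ownership resolution */
  PetscInt               rstart, i, *l2g;
  ISLocalToGlobalMapping mapping;
  Mat                    A;

  /* Step 2: find rstart via MPI_Scan (an inclusive prefix sum, so
     subtract our own contribution).  On rank 1 of the example above,
     n = 6 and rstart = 12. */
  MPI_Scan(&n, &rstart, 1, MPIU_INT, MPI_SUM, comm);
  rstart -= n;

  /* Owned dofs first: local i -> global rstart+i.  The ghost slots
     l2g[n .. n+ngh-1] must be filled with the owners' global indices,
     e.g. by scattering the new numbering over the mesh. */
  PetscMalloc((n+ngh)*sizeof(PetscInt), &l2g);
  for (i=0; i<n; i++) l2g[i] = rstart + i;
  /* ... fill l2g[n .. n+ngh-1] from the owning processes ... */
  ISLocalToGlobalMappingCreate(comm, n+ngh, l2g, &mapping);

  /* Step 3: local size n, so the solver sees global size sum(n) = 24
     in the example, partitioned [12,6,4,2]. */
  MatCreateIS(comm, n, n, PETSC_DECIDE, PETSC_DECIDE, mapping, &A);

How you fill the ghost slots depends on your mesh data structure; the
point is that 'n' (not the total size 24) goes into MatCreateIS,
together with the owned-first mapping.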

Does this help?

Jed
