distributed array
Berend van Wachem
berend at chalmers.se
Fri Jul 21 02:43:12 CDT 2006
Dear Mat,
> One other thing,
>
> in your functions VecOverGlobalToBlockGlobalBegin and
> VecOverGlobalToBlockGlobalEnd:
>
> I assume that Vec *A is the pointer to the blockglobal vector?
> Further, Vec **B is an array of pointers to MPI vectors, where each element
> of the array is an MPI vector associated with one subblock?
A is the pointer to the single vector over the complete problem; this is
the vector that will be used for the matrix computation. Indeed, *B is
a pointer to an array of vectors, each of which corresponds to a block.
> if that is so, then this is what I believe your functions are doing (please
> correct me if I am wrong):
>
> VecOverGlobalToBlockGlobalBegin: splits the blockglobal vector A into its MPI
> subblock vectors
>
> VecOverGlobalToBlockGlobalEnd: restores the vectors
>
> And in between these two function calls you can mess with the MPI subblock
> vectors?
Exactly.
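Schematically, the use looks something like the following. This is only a
sketch written from the description above, not copied from my code: the
real argument lists may carry extra parameters (block count, layout),
UpdateAllBlocks is just an example name, and error checking is left out.

    #include <petscvec.h>

    /* My own helper routines, not part of PETSc; argument lists sketched. */
    PetscErrorCode VecOverGlobalToBlockGlobalBegin(Vec *A, Vec **B);
    PetscErrorCode VecOverGlobalToBlockGlobalEnd(Vec *A, Vec **B);

    /* Hypothetical example: work on each block of the over-global vector A.
       A is the single vector over the complete problem, used for the matrix
       computation; Begin makes *B point to an array of MPI vectors, one per
       block. */
    PetscErrorCode UpdateAllBlocks(Vec A, PetscInt nblocks)
    {
      Vec      *B;
      PetscInt  b;

      VecOverGlobalToBlockGlobalBegin(&A, &B);
      for (b = 0; b < nblocks; b++) {
        /* modify the block-global vector B[b] here */
      }
      VecOverGlobalToBlockGlobalEnd(&A, &B);  /* fold the blocks back into A */
      return 0;
    }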
>
> then you iterate over all blocks (I assume this is the gluing part?)
I am not sure precisely what you mean. For gluing the blocks, I create
an array of ISs over which I scatter. In my problem it is even more
complicated than that, because the I-direction in one block can be a
different direction in a neighbouring block, which makes finding the
neighbours a little more difficult. Once the ISs are created, a scatter
makes the neighbour values available to the current block. I use the
same scatter once, at the beginning, to find the addresses of the
matrix locations.
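In outline, that neighbour exchange looks roughly like this. Again only a
sketch: the routine name and the index arrays are placeholders (the real
indices come from the block connectivity, including the swapped
directions), error checking is omitted, and I show the current PETSc
calling sequence for the IS/VecScatter routines, which has changed
slightly between PETSc versions.

    #include <petscvec.h>
    #include <petscis.h>

    /* Hypothetical helper: scatter the neighbour values a block needs from
       the over-global vector A into a ghost vector of that block. 'from'
       holds global indices into A, 'to' the corresponding positions in
       blockghost; both are built from the block-to-block connectivity. */
    PetscErrorCode ScatterNeighbourValues(Vec A, Vec blockghost, PetscInt n,
                                          const PetscInt from[],
                                          const PetscInt to[])
    {
      IS         isfrom, isto;
      VecScatter ctx;

      ISCreateGeneral(PETSC_COMM_SELF, n, from, PETSC_COPY_VALUES, &isfrom);
      ISCreateGeneral(PETSC_COMM_SELF, n, to,   PETSC_COPY_VALUES, &isto);
      VecScatterCreate(A, isfrom, blockghost, isto, &ctx);

      VecScatterBegin(ctx, A, blockghost, INSERT_VALUES, SCATTER_FORWARD);
      VecScatterEnd(ctx, A, blockghost, INSERT_VALUES, SCATTER_FORWARD);

      VecScatterDestroy(&ctx);
      ISDestroy(&isfrom);
      ISDestroy(&isto);
      return 0;
    }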
Good luck,
Berend.