[petsc-users] distributed arrays

Barry Smith bsmith at mcs.anl.gov
Sat May 1 18:50:32 CDT 2010


On May 1, 2010, at 6:25 PM, Jeff Brown wrote:

> I'm new to PETSc, but have been following your mailing list for
> several months.
> I'm using Fortran 95, if that changes the answer, but from reading the
> manual it looks like only the syntax changes.
>
>
> I'm interested in using distributed arrays in my work, but I need to  
> be able to specify 2D grids that are not simply NX by NY.
> I'm using a finite difference formulation and I would like to be  
> able to use a completely arbitrary number of processors,
> like NP = 17.
>
> For NP=17 on a square underlying finite difference grid (for the
> following illustration, the side of the square = 1),
> the optimum configuration (minimizing the ratio of boundary surface to
> interior volume) is the following:
>
> [3 x 3 grid, where each individual cell is 1/3 x 3/17],
> followed by a
> [2 x 4 grid, where each individual cell is 1/2 x 2/17]
>
> I've illustrated this below, with A and B being the 1/3 x 3/17 cells
> and C and D being the 1/2 x 2/17 cells:
> AAA BBB AAA CC DD CC DD
> AAA BBB AAA CC DD CC DD
> BBB AAA BBB CC DD CC DD
> BBB AAA BBB DD CC DD CC
> AAA BBB AAA DD CC DD CC
> AAA BBB AAA DD CC DD CC
>
> Is there a way to specify this distribution of grid cells using  
> distributed arrays?
>
>
    No, the PETSc DA can only handle tensor product grids.
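    What the DA can express is a tensor product of one-dimensional
splittings: you choose m processes in x and n in y with m*n = NP and,
optionally, how many grid columns/rows each process owns along each
dimension (the lx/ly arguments). For NP = 17 that means a 17 x 1 or
1 x 17 process grid, since 17 is prime. Roughly, with the petsc-3.1
style calls (exact names and argument lists vary between releases, and
the grid sizes here are only illustrative):

  /* Sketch: a tensor-product decomposition of a 102 x 102 grid over
     17 processes arranged 17 x 1; each process gets 6 grid columns. */
  #include "petscda.h"

  int main(int argc, char **argv)
  {
    DA             da;
    PetscInt       M = 102, N = 102;   /* global grid size (illustrative) */
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr);
    /* Pass arrays lx[17], ly[1] instead of PETSC_NULL to control how many
       columns/rows each process owns along each dimension. */
    ierr = DACreate2d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
                      M, N, 17, 1, 1, 1, PETSC_NULL, PETSC_NULL, &da);CHKERRQ(ierr);
    ierr = DADestroy(da);CHKERRQ(ierr);
    ierr = PetscFinalize();CHKERRQ(ierr);
    return 0;
  }

The lx/ly arrays let the pieces have different widths, but the
decomposition remains a tensor product, so the 3x3-plus-2x4 layout
above cannot be expressed.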
> -------------------------
>
> At different stages in computing with the distributed arrays, I  
> would like to be able to group sets of ghost values together for  
> communication.
> These will be different groups at different stages of the computation.
>
> Let me illustrate with an example.
> Step 1: Calculate A[east], B[east], C[east], D[east] on the ghost
> interface,
>       where each of A, B, C, D is a distributed array of doubles, and
>       [east] means calculating the ghost interface to the east of
> each cell
> Step 2: Transmit (A,B,C,D)[east] together.
>      Let's take the data on processor 1 for reference.
>      If I were implementing this in raw MPI, I would create a
> combined buffer = {A1east, B1east, C1east, D1east} and then send the
> whole buffer.
> Step 3: Overlapping calculations of A[west], B[west], C[west], D[west]
> Step 4: Transmit (A,B,C,D)[west] together
> Step 5: Overlapping calculations of A[north], B[north], C[north],  
> D[north]
> Step 6: Transmit (A,B,C,D)[north]
> Step 7: Overlapping calculations of A[south], B[south], C[south],  
> D[south]
> Step 8: Transmit (A,B,C,D)[south]
> Step 9: Overlapping calculations of A[center], B[center], C[center],
> D[center]
> Step 10: Wait on east, west, north, south
>
> At different stages, A, B, C, D might instead be A, B, C, D, E, F (where
> A, B, C, D are the same as above).
>
> Is there a good way to do this?

    The PETSc DA provides a simple interface for ghost points on
tensor product grids. It does not provide this level of control over
when communication takes place.
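    That interface is a single begin/end pair per update: all the ghost
values for a vector move at once, with no per-direction or per-field
grouping. A sketch with the same petsc-3.1 style calls, assuming the
DA da from the sketch above:

  Vec            g, l;
  PetscErrorCode ierr;

  ierr = DACreateGlobalVector(da, &g);CHKERRQ(ierr);
  ierr = DACreateLocalVector(da, &l);CHKERRQ(ierr);
  /* ... compute the owned values into g ... */
  ierr = DAGlobalToLocalBegin(da, g, INSERT_VALUES, l);CHKERRQ(ierr);
  /* work that needs no ghost values can overlap here */
  ierr = DAGlobalToLocalEnd(da, g, INSERT_VALUES, l);CHKERRQ(ierr);
  /* l now holds the owned values plus the ghost points from all sides */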

    Of course, you can write code to manage the decomposition and the
ghost point updates yourself directly with MPI (or some package that
does this), "wrap" the arrays as PETSc Vecs, and still use the
various PETSc solvers (though not the DAs).
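    A rough sketch of that route: pack the east-face values of the
fields into one buffer, post the nonblocking send/receive yourself,
and wrap each field's locally owned storage as a parallel Vec. The
helper names, counts, neighbor ranks, and arrays below are placeholders
for your own decomposition bookkeeping; VecCreateMPIWithArray() is
shown with its petsc-3.1 argument list.

  #include <string.h>
  #include "petscvec.h"

  /* Step 2 of the example: combine the east faces of two (or more) fields
     into one message to the east neighbor, and post the matching receive
     from the west neighbor.  Steps 3-9 overlap here; step 10 is
     MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE). */
  void send_east_faces(PetscScalar *Aeast, PetscMPIInt nA,
                       PetscScalar *Beast, PetscMPIInt nB,
                       PetscScalar *sendbuf, PetscScalar *recvbuf,
                       PetscMPIInt east, PetscMPIInt west, MPI_Request reqs[2])
  {
    memcpy(sendbuf,      Aeast, nA*sizeof(PetscScalar));
    memcpy(sendbuf + nA, Beast, nB*sizeof(PetscScalar));  /* ... likewise C, D ... */
    MPI_Isend(sendbuf, nA + nB, MPIU_SCALAR, east, 0, PETSC_COMM_WORLD, &reqs[0]);
    MPI_Irecv(recvbuf, nA + nB, MPIU_SCALAR, west, 0, PETSC_COMM_WORLD, &reqs[1]);
  }

  /* Wrap the locally owned part of one field as a parallel PETSc Vec so the
     PETSc solvers can still operate on it. */
  PetscErrorCode wrap_field(PetscScalar *array, PetscInt nlocal, Vec *v)
  {
    return VecCreateMPIWithArray(PETSC_COMM_WORLD, nlocal, PETSC_DECIDE, array, v);
  }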


     Barry

> Suggestions?
>
> --Jeff Brown
>
