I'm new to PETSc, but have been following your mailing list for several months.
I'm using Fortran 95, if that changes the answer, but from reading the manual it looks like only the syntax changes.

I'm interested in using distributed arrays in my work, but I need to be able to specify 2D grids that are not simply NX by NY.
I'm using a finite difference formulation, and I would like to be able to use a completely arbitrary number of processors, e.g. NP = 17.

For NP = 17 on a square underlying finite difference grid (for the following illustration, the side of the square is 1),
the optimum configuration (minimizing the ratio of boundary surface to interior volume) is the following:

[3 x 3 grid, where each individual block is 1/3 x 3/17],
followed by a
[2 x 4 grid, where each individual block is 1/2 x 2/17].
I've illustrated this below, with A and B being the 1/3 x 3/17 blocks and C and D being the 1/2 x 2/17 blocks:

AAA BBB AAA CC DD CC DD
AAA BBB AAA CC DD CC DD
BBB AAA BBB CC DD CC DD
BBB AAA BBB DD CC DD CC
AAA BBB AAA DD CC DD CC
AAA BBB AAA DD CC DD CC

Is there a way to specify this distribution of grid cells using distributed arrays?

-------------------------

At different stages in computing with the distributed arrays, I would like to be able to group sets of ghost values together for communication.
These will be different groups at different stages of the computation.

Let me illustrate with an example.
Step 1: Calculate A[east], B[east], C[east], D[east] on the ghost interface,
        where each of A, B, C, D is a distributed array of doubles,
        and [east] means calculating the ghost interface to the east of each cell.
Step 2: Transmit (A,B,C,D)[east] together.
        Let's pick the stuff on processor 1 for reference.
        If I were implementing this in raw MPI, I would create a combined buffer = {A1east, B1east, C1east, D1east} and then send the whole buffer in one message.
Step 3: Overlapping calculations of A[west], B[west], C[west], D[west]
Step 4: Transmit (A,B,C,D)[west] together
Step 5: Overlapping calculations of A[north], B[north], C[north], D[north]
Step 6: Transmit (A,B,C,D)[north]
Step 7: Overlapping calculations of A[south], B[south], C[south], D[south]
Step 8: Transmit (A,B,C,D)[south]
Step 9: Overlapping calculations of A[center], B[center], C[center], D[center]
Step 10: Wait on east, west, north, south
At different stages, the set A, B, C, D might instead be A, B, C, D, E, F (where A, B, C, D are the same as above).

Is there a good way to do this?
Suggestions?

--Jeff Brown