Hi,

I'm working on a 2D Cartesian grid that I need to decompose for MPI in my CFD Fortran code. The grid size is 110 x 70. I wonder how I should decompose the grid - horizontally or vertically?
For example, with 2 processors, I could split it into two 55x70 grids or two 110x35 grids.

I thought that communication between the grids would be less with the 55x70 split, because each exchange involves only 70 values, whereas with 110x35 it involves 110 values.
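To put rough numbers on it (assuming double precision, i.e. 8 bytes per value, and a single ghost layer as for a 5-point stencil): the 55x70 split sends 70 values = 560 bytes across the interface per exchange, while the 110x35 split sends 110 values = 880 bytes, so the vertical cut moves about 36% less data per exchange.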
Hence, does it matter in PETSc how the decomposition is done?

On the other hand, since Fortran is column-major, I do the calculation as:

    do j = 1, size_y
       do i = 1, size_x
          f(i,j) = ....
       end do
    end do

With the 55x70 split I'll need to "package" the 70 boundary values into a chunk for efficient sending, because they are strided in memory. However, with 110x35 I can use mpi_isend directly, since the boundary row is contiguous data.

So is there a better option, given that there seems to be a conflict between communication volume and memory layout? I read about the use of DMMG. Will this problem be dealt with much better if I use DMMG instead?
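From what I've read, one way to resolve the conflict with the 55x70 split might be an MPI derived datatype: MPI_TYPE_VECTOR can describe the strided boundary column, so mpi_isend can send it without manual packing. Here is a minimal sketch of what I mean (names like nx, ny, coltype are placeholders of mine; in a real code the received column would go into a ghost layer rather than overwrite interior values):

    program halo_sketch
      use mpi
      implicit none
      integer, parameter :: nx = 55, ny = 70   ! local block size
      double precision   :: f(nx, ny)
      integer :: coltype, req, rank, ierr
      integer :: status(MPI_STATUS_SIZE)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      f = dble(rank)

      ! 70 single values, each separated in memory by the local
      ! leading dimension nx = 55 (Fortran column-major layout).
      call MPI_TYPE_VECTOR(ny, 1, nx, MPI_DOUBLE_PRECISION, coltype, ierr)
      call MPI_TYPE_COMMIT(coltype, ierr)

      if (rank == 0) then
         ! Send the strided boundary column f(nx,1:ny) directly.
         call MPI_ISEND(f(nx,1), 1, coltype, 1, 0, MPI_COMM_WORLD, req, ierr)
         call MPI_WAIT(req, status, ierr)
      else if (rank == 1) then
         call MPI_RECV(f(1,1), 1, coltype, 0, 0, MPI_COMM_WORLD, status, ierr)
      end if

      call MPI_TYPE_FREE(coltype, ierr)
      call MPI_FINALIZE(ierr)
    end program halo_sketch

As far as I understand, DMMG is built on PETSc's distributed arrays (DMDA/DA), which choose the process layout themselves and handle these strided ghost exchanges internally, so neither the 55x70 vs 110x35 choice nor the packing would be my problem anymore. Something roughly like this (exact argument lists differ between PETSc versions, so this is only a sketch):

    program dmda_sketch
    #include <petsc/finclude/petscdmda.h>
      use petscdmda
      implicit none
      DM             :: da
      PetscErrorCode :: ierr
      PetscInt       :: xs, ys, xm, ym
      PetscInt       :: mx, my, dof, sw

      call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
      mx = 110; my = 70; dof = 1; sw = 1

      ! 110x70 global grid; PETSC_DECIDE lets PETSc pick the
      ! process layout; sw = 1 ghost layer for a star stencil.
      call DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, &
           DMDA_STENCIL_STAR, mx, my, PETSC_DECIDE, PETSC_DECIDE, &
           dof, sw, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr)
      call DMSetUp(da, ierr)

      ! Owned corner and size of my local patch:
      ! loop over i = xs+1 .. xs+xm, j = ys+1 .. ys+ym.
      call DMDAGetCorners(da, xs, ys, PETSC_NULL_INTEGER, &
           xm, ym, PETSC_NULL_INTEGER, ierr)

      call DMDestroy(da, ierr)
      call PetscFinalize(ierr)
    end program dmda_sketch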
Thank you very much.