Which way to decompose domain/grid

Jed Brown jed at 59A2.org
Thu Dec 10 03:09:20 CST 2009


On Thu, 10 Dec 2009 16:44:02 +0800, Wee-Beng Tay <zonexo at gmail.com> wrote:
> Hi,
> 
> I'm working on a 2D Cartesian grid and I'm going to decompose the grid for
> MPI for my CFD Fortran code. The grid size is in the ratio of 110 x 70. I
> wonder how I should decompose the grid - horizontally or vertically?

Both.

> I'll need to "package" the 70 values in a chunk for efficient sending.
> However, if it's in 110x35, I can use mpi_isend directly since it's
> contiguous data.

This will make no performance difference and would make things very
fragile.  The cost of packing the ghost values is trivial compared to
the cost of sending them (which is dominated by latency, so it matters
little how many values are sent).

> So is there a better option since there seems to be a conflict? I read about
> the use of DMMG. Will this problem be dealt with much better if I use DMMG
> instead?

DMMG is for multigrid; start with a DA, as in

DACreate2d(PETSC_COMM_WORLD,wrap,stencil_type,110,70,PETSC_DECIDE,PETSC_DECIDE,...)

The user's manual has a good section on this.


Jed
