Which way to decompose domain/grid

Wee-Beng Tay zonexo at gmail.com
Fri Dec 11 04:24:58 CST 2009


Hi Jed,

Thanks for the answers. However, I'm still a bit confused; my question is
below. Also, you mention in a later mail that "Many people have the
perception that the goal of partitioning is to
minimize the number of neighbor processes.  This actually has little impact
on communication costs and is usually detrimental to solver performance."

But you do mention latency, so shouldn't minimizing the number of neighbor
processes reduce latency and improve performance?

On Thu, Dec 10, 2009 at 5:09 PM, Jed Brown <jed at 59a2.org> wrote:

> On Thu, 10 Dec 2009 16:44:02 +0800, Wee-Beng Tay <zonexo at gmail.com> wrote:
> > Hi,
> >
> > I'm working on a 2D Cartesian grid and I'm going to decompose the grid
> for
> > MPI for my CFD Fortran code. The grid size is in the ratio of 110 x 70. I
> > wonder how I should decompose the grid - horizontally or vertically?
>
> Both
>

By "both" do you mean dividing the one big 110x70 grid into four 55x35 grids?

>
> > I'll need to "package" the 70 values in a chunk for efficient sending.
> > However, if it's in 110x35, I can use mpi_isend directly since it's
> > contiguous data.
>
> This will make no performance difference and would make things very
> fragile.  The cost of packing the ghosted values is trivial compared to
> the cost of sending them (which is mostly due to latency so it doesn't
> matter much how many values are sent).
>

So whichever direction I decompose (horizontal or vertical) doesn't matter?
But would splitting into four 55x35 grids be better?
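
(As an aside, the "packing" Jed calls trivial is only a short copy loop.
Below is a minimal sketch in C, not from this thread: it assumes a row-major
NX x NY local block whose shared boundary column is strided, and the names
u, buf, and right are purely illustrative. A derived datatype such as
MPI_Type_vector would avoid the copy entirely, but as noted above the copy
cost is negligible anyway.)

/* Sketch only: pack one strided boundary column of a row-major NX x NY
 * local block into a contiguous buffer, then post a nonblocking send.
 * NX, NY, u, buf, and the neighbour rank "right" are illustrative.       */
#include <mpi.h>

#define NX 110
#define NY 35

void send_right_column(double u[NX][NY], double buf[NX],
                       int right, MPI_Comm comm, MPI_Request *req)
{
    for (int i = 0; i < NX; i++)   /* gather the strided values; this copy */
        buf[i] = u[i][NY - 1];     /* is cheap next to the message latency */
    MPI_Isend(buf, NX, MPI_DOUBLE, right, 0, comm, req);
}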

>
> > So is there a better option since there seems to be a conflict? I read
> about
> > the use of DMMG. Will this problem be dealt with much better if I use
> DMMG
> > instead?
>
> DMMG is for multigrid; start with a DA, as in
>
>
> DACreate2d(PETSC_COMM_WORLD,wrap,stencil_type,110,70,PETSC_DECIDE,PETSC_DECIDE,...)
>
> The user's manual has a good section on this.
>
>
> Jed
>
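
(For reference, a minimal sketch of the DA setup Jed points to, written
against the 2009-era C interface; the same call is available from Fortran,
and later PETSc releases rename it DMDACreate2d. The periodicity, stencil
type, degrees of freedom, and stencil width below are placeholders rather
than values taken from this thread.)

#include "petscda.h"

int main(int argc, char **argv)
{
  DA  da;
  Vec x;

  PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);
  DACreate2d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
             110, 70,                     /* global grid size              */
             PETSC_DECIDE, PETSC_DECIDE,  /* PETSc chooses the process
                                             layout in each direction      */
             1,                           /* degrees of freedom per node   */
             1,                           /* stencil (ghost) width         */
             PETSC_NULL, PETSC_NULL,      /* default ownership ranges      */
             &da);
  DACreateGlobalVector(da, &x);           /* vector distributed over the DA */
  /* ... set up and solve ... */
  VecDestroy(x);
  DADestroy(da);
  PetscFinalize();
  return 0;
}

Once the DA exists, DAGlobalToLocalBegin/DAGlobalToLocalEnd perform the ghost
exchange, so the packing question above is handled internally whichever way
PETSc splits the 110 x 70 grid.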