[MPICH] MPI_Dims_create() and square numbers

Andrea Di Blas andrea at soe.ucsc.edu
Mon Jul 16 17:59:04 CDT 2007


Hi Bill,


Thank you very much for your reply. It did seem strange...
I'd be happy to know what you guys decide.

regards,

	andrea



-- 
Andrea  Di Blas                     UCSC
                   School of Engineering
Tel: (831) 459 4193  Fax: (831) 459 4829

On Sun, 15 Jul 2007, William Gropp wrote:

> Thanks for the example.  This isn't the intended default behavior;
> for systems with a particular underlying topology, there is a
> mechanism for that system to override the default Dims_create (e.g.,
> on a system with a mesh interconnect that might be a power of 2 in
> each direction).  We'll have a look at improving the default case.
>
> Bill
>
> On Jul 14, 2007, at 6:37 PM, Andrea Di Blas wrote:
>
> > Hello,
> >
> >
> > I am wondering about the strategy MPI_Dims_create() uses to
> > balance the grid size across all dimensions.
> >
> >
> > For instance, when sizing a two-dimensional Cartesian communicator
> > with a square number p of processes, without enforcing constraints,
> > the two dimensions sometimes come out as sqrt(p), as I would have
> > expected (e.g. with p = 4 or p = 16), while other times they do not
> > (e.g. with p = 9 or p = 100). The same behavior happens with cubes.
> >
> > I wonder whether I am doing something wrong or whether this is
> > intentional, and if so, why. Thanks in advance.
> >
> >
> > regards,
> >
> >
> > 	andrea
> >
> >
> > /*---------------------------------------------------------*/
> > CODE: (for the cube case)
> >
> >
> > for (i = 2; i < 12; ++i)
> > {
> > 	grid_size[0] = grid_size[1] = grid_size[2] = 0;
> > 	MPI_Dims_create(i*i*i, 3, grid_size);
> > 	printf("\nMPI_Dims_create(%4d, 3, grid_size) = %3d, %3d, %3d", i*i*i,
> > 			grid_size[0], grid_size[1], grid_size[2]);
> > }
> >
> >
> >
> > OUTPUT: (for the square case)
> >
> > MPI_Dims_create(  4, 2, grid_size) =  2, 2
> > MPI_Dims_create(  9, 2, grid_size) =  9, 1
> > MPI_Dims_create( 16, 2, grid_size) =  4, 4
> > MPI_Dims_create( 25, 2, grid_size) = 25, 1
> > MPI_Dims_create( 36, 2, grid_size) = 18, 2
> > MPI_Dims_create( 49, 2, grid_size) = 49, 1
> > MPI_Dims_create( 64, 2, grid_size) =  8, 8
> > MPI_Dims_create( 81, 2, grid_size) =  9, 9
> > MPI_Dims_create(100, 2, grid_size) = 50, 2
> > MPI_Dims_create(121, 2, grid_size) = 121, 1
> >
> >
> > OUTPUT: (for the cube case)
> >
> > MPI_Dims_create(   8, 3, grid_size) =   4,   2,   1
> > MPI_Dims_create(  27, 3, grid_size) =   9,   3,   1
> > MPI_Dims_create(  64, 3, grid_size) =   4,   4,   4
> > MPI_Dims_create( 125, 3, grid_size) = 125,   1,   1
> > MPI_Dims_create( 216, 3, grid_size) = 108,   2,   1
> > MPI_Dims_create( 343, 3, grid_size) = 343,   1,   1
> > MPI_Dims_create( 512, 3, grid_size) =   8,   8,   8
> > MPI_Dims_create( 729, 3, grid_size) =   9,   9,   9
> > MPI_Dims_create(1000, 3, grid_size) = 500,   2,   1
> > MPI_Dims_create(1331, 3, grid_size) = 1331,   1,   1
> >
> >
> >
> > --
> > Andrea  Di Blas                     UCSC
> >                    School of Engineering
> > Tel: (831) 459 4193  Fax: (831) 459 4829
> >
>
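For anyone who wants to reproduce the output above, a self-contained
version of the quoted test might look like the following (a minimal
sketch with no error checking; only the cube loop is shown, since the
square case differs only in the arguments to MPI_Dims_create):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int i, grid_size[3];

    MPI_Init(&argc, &argv);

    for (i = 2; i < 12; ++i) {
        /* MPI_Dims_create only fills in entries that are zero */
        grid_size[0] = grid_size[1] = grid_size[2] = 0;
        MPI_Dims_create(i * i * i, 3, grid_size);
        printf("MPI_Dims_create(%4d, 3, grid_size) = %3d, %3d, %3d\n",
               i * i * i, grid_size[0], grid_size[1], grid_size[2]);
    }

    MPI_Finalize();
    return 0;
}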


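The balanced decomposition the poster expected can be produced by a
simple greedy factorization: collect the prime factors of nnodes and
assign them, largest first, to whichever dimension currently has the
smallest running product. The sketch below only illustrates that idea;
it is not MPICH's actual MPI_Dims_create implementation, and
balanced_dims is a hypothetical helper:

#include <stdio.h>

/* Assign the prime factors of nnodes, largest first, to the
 * dimension with the smallest running product.  For the square and
 * cube inputs in the test above this yields the balanced
 * sqrt(p) x sqrt(p) (or cbrt(p)-per-side) grids the poster expected. */
static void balanced_dims(int nnodes, int ndims, int dims[])
{
    int factors[64], nf = 0, p, i, j, best;

    for (p = 2; p * p <= nnodes; ++p)       /* trial division */
        while (nnodes % p == 0) {
            factors[nf++] = p;
            nnodes /= p;
        }
    if (nnodes > 1)
        factors[nf++] = nnodes;             /* leftover prime */

    for (j = 0; j < ndims; ++j)
        dims[j] = 1;

    for (i = nf - 1; i >= 0; --i) {         /* largest factor first */
        best = 0;
        for (j = 1; j < ndims; ++j)
            if (dims[j] < dims[best])
                best = j;
        dims[best] *= factors[i];
    }
}

int main(void)
{
    int p, dims[2];

    for (p = 2; p <= 11; ++p) {
        balanced_dims(p * p, 2, dims);
        printf("balanced_dims(%4d, 2) = %3d x %3d\n",
               p * p, dims[0], dims[1]);
    }
    return 0;
}

For p = 9 this prints 3 x 3, and for p = 100 it prints 10 x 10,
matching the sqrt(p) result the poster expected instead of the
9 x 1 and 50 x 2 grids shown in the output above.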

