[mpich-discuss] Using MPI_Comm_split to MPI_COMM_LOCAL

Bill Rankin Bill.Rankin at sas.com
Tue Nov 9 09:32:54 CST 2010


Hi John,

Could you just mask off 16 bits of the IP address (i.e., use a pseudo-netmask) to get a unique color? I could envision a network architecture where that would not work - nodes randomly assigned IPs out of an address space wider than 16 bits - but clusters are usually laid out much more regularly than that.

You run into a problem at 64k nodes in any case, but until then you are fine. :-) 
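For what it's worth, here is a minimal sketch of that idea in C (untested). It assumes the per-node addresses differ only in their low 16 bits, and the interface name "eth0" is just a placeholder for whatever interface sits on your cluster fabric - adjust both for your network.

/* Minimal sketch (untested): derive a per-node color from the host's IPv4
 * address and split MPI_COMM_WORLD with it.  Assumes node addresses differ
 * only in their low 16 bits (e.g. a flat 10.x.y.z cluster network); the
 * interface name "eth0" is a placeholder for your cluster fabric. */
#include <mpi.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <ifaddrs.h>

static int local_color(void)
{
    struct ifaddrs *ifap, *ifa;
    uint32_t addr = 0;

    getifaddrs(&ifap);
    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr && ifa->ifa_addr->sa_family == AF_INET &&
            strcmp(ifa->ifa_name, "eth0") == 0) {
            addr = ntohl(((struct sockaddr_in *)ifa->ifa_addr)->sin_addr.s_addr);
            break;
        }
    }
    freeifaddrs(ifap);

    return (int)(addr & 0xFFFF);   /* pseudo-netmask: keep the low 16 bits */
}

int main(int argc, char **argv)
{
    MPI_Comm local;
    int world_rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* All ranks on the same node see the same IP, hence the same color. */
    MPI_Comm_split(MPI_COMM_WORLD, local_color(), world_rank, &local);

    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}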

-b


> -----Original Message-----
> From: mpich-discuss-bounces at mcs.anl.gov [mailto:mpich-discuss-
> bounces at mcs.anl.gov] On Behalf Of John Bent
> Sent: Monday, November 08, 2010 5:47 PM
> To: mpich-discuss
> Subject: [mpich-discuss] Using MPI_Comm_split to MPI_COMM_LOCAL
> 
> All,
> 
> We'd like to create an MPI Communicator for just the processes on each
> local node (i.e. something like MPI_COMM_LOCAL).  We were doing this
> previously very naively by having everyone send out their hostnames and
> then doing string parsing.  We realize that a much simpler way to do it
> would be to use MPI_Comm_split to split MPI_COMM_WORLD by the IP
> address.  Unfortunately, the IP address is 64 bits and the max "color"
> to pass to MPI_Comm_split is only 2^16.  So we're currently planning on
> splitting iteratively on each 16 bits in the 64 bit IP address.
> 
> Anyone know a better way to achieve MPI_COMM_LOCAL?  Or can
> MPI_Comm_split be enhanced to take a 64 bit color?
> --
> Thanks,
> 
> John
> _______________________________________________
> mpich-discuss mailing list
> mpich-discuss at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
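For reference, a rough C sketch (untested) of the iterative splitting John describes above. The 64-bit node key is left as a caller-supplied argument, since how it is derived from the IP address will vary by site.

/* Split 'world' into per-node communicators using a 64-bit node key,
 * consumed 16 bits at a time.  'key' must be identical on all ranks of a
 * node and distinct across nodes (e.g. built from the node's IP address). */
#include <mpi.h>
#include <stdint.h>

static MPI_Comm split_local(MPI_Comm world, uint64_t key)
{
    MPI_Comm cur = world, next;
    int rank, i;

    MPI_Comm_rank(world, &rank);

    for (i = 0; i < 4; i++) {                 /* 4 passes x 16 bits = 64 bits */
        int color = (int)((key >> (16 * i)) & 0xFFFF);
        MPI_Comm_split(cur, color, rank, &next);
        if (cur != world)                     /* free intermediate communicators */
            MPI_Comm_free(&cur);
        cur = next;
    }
    return cur;   /* ranks agreeing on all 64 key bits, i.e. the same node */
}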
