[mpich-discuss] Using MPI_Comm_split to MPI_COMM_LOCAL
John Bent
johnbent at lanl.gov
Thu Nov 11 12:14:08 CST 2010
Excerpts from Dave Goodell's message of Wed Nov 10 13:51:32 -0700 2010:
> But your iterative solution sounds like a very pragmatic approach to me.
>
Sam Gutierrez just came up with an even more elegant approach, I think:
0) allgather and then sort an array of network addresses
1) walk down the array, counting unique addresses, until you find your own
2) your color is the number of unique addresses seen when you find your own
3) call MPI_Comm_split(color)
[Our previous approach did one allreduce to determine whether IPv4 or
IPv6 and then iterated on MPI_Comm_split an appropriate number of
times.]
Thanks all for the help figuring out this stop-gap solution. We'll
continue to push on MPI developers to expose MPI_COMM_LOCAL and
MPI_COMM_ONE_PER_NODE in the same way they currently do with
MPI_COMM_WORLD.
--
Thanks,
John