[mpich-discuss] Can ch3:nemesis:mx use both MX and TCP networks ?

Dave Goodell goodell at mcs.anl.gov
Tue Oct 13 13:18:02 CDT 2009


On Oct 13, 2009, at 12:10 PM, Scott Atchley wrote:

> On Oct 13, 2009, at 1:05 PM, Guillaume Mercier wrote:
>
>>> I always thought (but never tested) that the ch3:nemesis:mx
>>> channel has the ability to independently select either MX or TCP
>>> to communicate between any pair of MPI processes when shared
>>> memory communication isn’t possible (e.g. the two processes run on
>>> different nodes).
>>
>> When Nemesis:Mx is selected, only MX can be used.
>
> It is not just a runtime selection; it is a compile-time selection.
>
> You could compile your application against a shared MPICH2
> Nemesis/TCP lib. You could then run that binary and use
> LD_LIBRARY_PATH to select either a Nem/TCP or a Nem/MX libmpich.so.

That would work, but it's a bit heavyweight.  I believe that switching
between TCP and MX at runtime is currently possible via an environment
variable, as documented here:
http://wiki.mcs.anl.gov/mpich2/index.php/Nemesis_Runtime_Netmod_Selection
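
For reference, the two-build approach Scott describes would look
roughly like the following.  The configure flags and install prefixes
here are only illustrative, not taken from anything above:

  # Build two shared MPICH2 installs, one per netmod (TCP is the
  # default netmod for ch3:nemesis).
  ./configure --with-device=ch3:nemesis --enable-shared \
      --prefix=/opt/mpich2-nem-tcp && make && make install
  ./configure --with-device=ch3:nemesis:mx --enable-shared \
      --prefix=/opt/mpich2-nem-mx && make && make install

  # Link the application against one of them, then point
  # LD_LIBRARY_PATH at whichever install's libmpich.so you want
  # before launching.
  export LD_LIBRARY_PATH=/opt/mpich2-nem-mx/lib:$LD_LIBRARY_PATH
  mpiexec -n 4 ./my_app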

Skimming the doc, it looks like some of the names are a bit stale
(s/newtcp,gm/tcp,mx/g), but the basic idea should still hold, and the
environment variable definitely still has the same name.  I'll take a
quick pass over that doc soon.
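
In concrete terms, and assuming the variable that page refers to is
MPICH_NEMESIS_NETMOD (the name isn't spelled out above, so treat this
as a sketch), per-run selection would be along these lines:

  # Choose the netmod for a given run; the values are "tcp" and "mx"
  # rather than the stale "newtcp"/"gm".  Depending on the process
  # manager you may need mpiexec's -env/-genv options to propagate
  # the variable to all ranks.
  MPICH_NEMESIS_NETMOD=tcp mpiexec -n 4 ./my_app
  MPICH_NEMESIS_NETMOD=mx mpiexec -n 4 ./my_app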

>>> When shared memory communication isn’t possible, does the
>>> ch3:nemesis:mx channel have the ability to independently select
>>> either MX or TCP to communicate between two MPI processes?
>>>
>> The MX module does not offer this possibility. Could you explain to
>> me when shared memory is not possible within a node? I don't see
>> the case.
>
> I believe he is asking for selection of Nem/TCP or Nem/MX for
> non-shmem traffic.

Selecting the communication method on a per-process-pair basis is not
yet implemented, but IIRC it's something we would like to support
eventually.

Darius is on vacation today, but he should be back tomorrow.  He will  
know the status of these various features better than anyone else.

-Dave


