<div>Thank you very much!</div>
<div>I would really appreciate it if you could give me some detailed steps, from the implementation of these functions to deploying MPICH2 on my clusters (for example, how to modify the configure options).</div>
<div>I ran a test yesterday: I installed MPICH2 three times and checked the make output, choosing ./configure --with-device=ch3:sock, --with-device=ch3:sctp, and the default.</div>
<div>Do these devices refer to the internode communication channel? And is the default intranode communication channel shared memory, which can't be changed?</div>
<div>Thank you very much!</div>
<div>Best wishes!<br></div>
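For reference, the three builds compared above can be sketched roughly as follows. This is only a sketch: the install prefixes are hypothetical, and the claim that nemesis is the default channel applies to MPICH2 1.1 and later.

```shell
# Sketch of the three MPICH2 builds described above.
# The /opt/... prefixes are hypothetical; each device is fixed at
# configure time, so each variant needs its own build and install tree.

# 1. TCP sockets channel
./configure --with-device=ch3:sock --prefix=/opt/mpich2-sock
make && make install

# 2. SCTP channel
./configure --with-device=ch3:sctp --prefix=/opt/mpich2-sctp
make && make install

# 3. Default channel (ch3:nemesis in MPICH2 1.1 and later):
#    shared memory within a node, TCP between nodes
./configure --prefix=/opt/mpich2-default
make && make install
```

Because the channel is compiled in, comparing channels means pointing your PATH (or mpiexec) at the corresponding install prefix for each run.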
<div class="gmail_quote">2009/11/13 Darius Buntinas <span dir="ltr"><<a href="mailto:buntinas@mcs.anl.gov">buntinas@mcs.anl.gov</a>></span><br>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid"><br>The ability to use different networks for different<br>destinations is a feature we intend to add, but it's not<br>
yet implemented. Nemesis does use shared-memory communication<br>for intranode communication and the network for internode.<br><br>There are several internal interfaces for porting MPICH2 to new<br>platforms or interconnects. This paper has a description of the<br>
interfaces:<br><a href="http://www.mcs.anl.gov/~buntinas/papers/europvmmpi06-nemesis.pdf" target="_blank">http://www.mcs.anl.gov/~buntinas/papers/europvmmpi06-nemesis.pdf</a><br><br>If you're interested in porting MPICH2 to a new interconnect,<br>
you should write a nemesis network module. The API is described<br>here:<br><a href="http://wiki.mcs.anl.gov/mpich2/index.php/Nemesis_Network_Module_API" target="_blank">http://wiki.mcs.anl.gov/mpich2/index.php/Nemesis_Network_Module_API</a><br>
<br>I hope this helps,<br><font color="#888888"><br>-d<br></font>
<div>
<div></div>
<div class="h5"><br><br><br><br><br><br>On 11/12/2009 11:29 PM, yaohui wrote:<br>> Thank you very much!<br>> Finally I got a response!<br>> I am doing some researches on adding HT(HyperTransport) into mpich2,I am<br>
> just wandering when we configure the mpich2 with<br>> ./configure –with-device=ch3:sock (ch3:nemesis)<br>> that means we definitely select one network as the channel to communicate,<br>> but if there are any possibilities that you can dynamic<br>
> control the network we use ,sometimes messages go through Ethernet,<br>> sometimes go through IB. could you give me some materials<br>> to understand what's the difference between ~/ch3/channel/sock<br>> ,~/ch3/channel/sctp ~,~/ch3/channel/nemesis/nemesis/net_mod/tcp_module<br>
> , ~/ch3/channel/nemesis/nemesis/net_mod/sctp_module,<br>> ~/ch3/channel/nemesis/nemesis/net_mod/ib_module<br>><br>> Thank you again!<br>><br>> Best Wishes!<br>> -----邮件原件-----<br>> 发件人: <a href="mailto:mpich-discuss-bounces@mcs.anl.gov">mpich-discuss-bounces@mcs.anl.gov</a> [mailto:<a href="mailto:mpich-discuss-bounces@mcs">mpich-discuss-bounces@mcs</a>.<br>
> <a href="http://anl.gov/" target="_blank">anl.gov</a>] 代表 Darius Buntinas<br>> 发送时间: 2009年11月12日 23:30<br>> 收件人: <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>> 主题: Re: [mpich-discuss] how to connect MPI_Send with CH3_isend<br>
><br>> The best way to understand what happens is to step through the calls<br>> with a debugger. The channel is selected in the configure step, and<br>> only that channel will be compiled, so there will be only one copy of<br>
> ch3_isend in the binary.<br>><br>> -d<br>><br>> On 11/12/2009 05:54 AM, hu yaohui wrote:<br>>> hi guys!<br>>> Could someone tell me the detail calling flow from MPI_Send() to<br>>> CH3_isend()?<br>
>> How to control which CH3_isend() is called between sock, sctp, and<br>>> shm? Thank you very much!<br>>><br>>> best<br>>> wishes!<br>>><br>><br>>> loki!<br>>><br>>><br>
>><br>>> ------------------------------------------------------------------------<br>>><br>>> _______________________________________________<br>>> mpich-discuss mailing list<br>>> <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>>> <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br></div></div><br>
<br></blockquote></div><br>