<div>Thank you very much!</div>
<div>Yes, you are right! I want the processes to communicate that way.</div>
<div>I really appreciate your help. I'll give it a shot anyhow; I recently read</div>
<div>something about MPICH/Madeleine, and I hope that helps.</div>
<div>Thank you very much again!</div>
<div> --Best wishes</div>
<div> --Yaohui Hu<br><br></div>
<div class="gmail_quote">2009/11/15 Darius Buntinas <span dir="ltr"><<a href="mailto:buntinas@mcs.anl.gov">buntinas@mcs.anl.gov</a>></span><br>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid"><br>Do you mean use sockets to communicate between processes 1 and<br>2, but use IB to communicate between processes 1 and 3?<br>
<br>We intend to support this, but all the pieces are not there yet.<br>There are function pointer tables for each netmod.<br><br>If you take a look at MPID_nem_vc_init(), you'll see where<br>MPID_nem_netmod_func->vc_init(vc) is called to set the function<br>
pointers for the various communication operations used when<br>sending a message over a particular network module.<br><br>Currently MPID_nem_netmod_func is set globally, so that all<br>internode vcs use the same netmod. In order to use different<br>
netmods for different vcs, you'll have to call the vc_init() of<br>the netmod you want to use on that vc.<br><br>You'll see other places where MPID_nem_netmod_func is referenced,<br>and you'll have to fix those so that either they call the function<br>
for the netmod associated with that vc, or (e.g., in the case of<br>poll or finalize) you'll need to call the function for all active<br>netmods.<br>
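<br>As a rough illustration of that dispatch pattern, here is a small self-contained mock (not the actual MPICH2 source; the struct fields and table names are made up for the example) in which each vc carries its own netmod function table:<br><pre>
#include &lt;stdio.h&gt;

struct vc; /* forward declaration so the table can point at vc functions */

/* Minimal stand-in for a netmod function pointer table; the real
   tables in MPICH2 have many more entries. */
typedef struct netmod_funcs {
    const char *name;
    int (*send)(struct vc *vc, const char *msg);
} netmod_funcs_t;

/* A virtual connection.  The key change: instead of consulting one
   global table, each vc records which netmod's table to use. */
typedef struct vc {
    int remote_rank;
    const netmod_funcs_t *funcs;
} vc_t;

static int tcp_send(struct vc *v, const char *msg)
{ printf("to rank %d via tcp: %s\n", v->remote_rank, msg); return 0; }

static int ib_send(struct vc *v, const char *msg)
{ printf("to rank %d via ib: %s\n", v->remote_rank, msg); return 0; }

static const netmod_funcs_t tcp_funcs = { "tcp", tcp_send };
static const netmod_funcs_t ib_funcs  = { "ib",  ib_send  };

/* Analogue of calling the chosen netmod's vc_init() on that vc. */
static void my_vc_init(vc_t *v, int rank, const netmod_funcs_t *funcs)
{ v->remote_rank = rank; v->funcs = funcs; }

int main(void)
{
    vc_t to2, to3;
    my_vc_init(&to2, 2, &tcp_funcs); /* sockets to process 2 */
    my_vc_init(&to3, 3, &ib_funcs);  /* IB to process 3 */

    /* Every send dispatches through the vc's own table, so different
       destinations can use different netmods. */
    to2.funcs->send(&to2, "hello over sockets");
    to3.funcs->send(&to3, "hello over IB");
    return 0;
}
</pre>Poll and finalize would then loop over all active netmod tables rather than dispatching through a single vc.<br>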
<br>-d<br><br>On 11/14/2009 01:07 AM, hu yaohui wrote:<br>> Thank you very much. I am just wondering: if I implement two internode<br>> communication channels, can I control which kind of message goes through<br>> which channel, so that one kind of message goes through one channel while<br>> another kind goes through the other channel (rather than one kind of<br>> message going through different channels)? Thank you very much!<br>><br>> On Sat, Nov 14, 2009 at 2:32 AM, Rajeev Thakur <<a href="mailto:thakur@mcs.anl.gov">thakur@mcs.anl.gov</a>> wrote:<br>
><br>> If you use the default Nemesis channel (ch3:nemesis), communication<br>> within a node goes over shared memory, and communication across nodes<br>> goes over some network module.<br>><br>> Rajeev<br>
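> As a quick, illustrative way to see both paths (this example is not from the thread), the following MPI program, launched with ranks 0 and 1 on one node and rank 2 on a second node, uses shared memory for the intranode hop and the network module for the internode hops:<br><pre>
/* ring.c: each rank passes a token to the next rank.  With ranks 0
   and 1 on one node and rank 2 on another, nemesis moves 0->1 over
   shared memory and 1->2, 2->0 over the network. */
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

int main(int argc, char **argv)
{
    int rank, size, token;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    token = rank;
    /* Send to the next rank; receive from the previous one. */
    MPI_Sendrecv_replace(&token, 1, MPI_INT,
                         (rank + 1) % size, 0,
                         (rank + size - 1) % size, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank %d got token %d\n", rank, token);
    MPI_Finalize();
    return 0;
}
</pre>Run it with three processes spread over two hosts (the exact mpiexec flags depend on your process manager).<br>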
><br>> ------------------------------------------------------------------------<br>> *From:* <a href="mailto:mpich-discuss-bounces@mcs.anl.gov">mpich-discuss-bounces@mcs.anl.gov</a><br>> [mailto:<a href="mailto:mpich-discuss-bounces@mcs.anl.gov">mpich-discuss-bounces@mcs.anl.gov</a>] *On Behalf Of *hu yaohui<br>> *Sent:* Friday, November 13, 2009 3:13 AM<br>> *To:* <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> *Subject:* Re: [mpich-discuss] Reply: how to connect MPI_Send with<br>> CH3_isend<br>><br>> Thank you very much!<br>> I would really appreciate it if you could give me some detailed steps,<br>> from the implementation of these functions to deploying MPICH2 on my<br>> clusters (such as how to modify the configure file, something like that).<br>> I did a test yesterday: I installed MPICH2 three times and then checked<br>> the make output, choosing ./configure --with-device=ch3:sock,<br>> --with-device=ch3:sctp, and the default.<br>> Do these devices refer to the internode communication channel? And is<br>> the default intranode communication channel shared memory, which can't<br>> be modified?<br>><br>> Thank you very much!<br>><br>> Best wishes!<br>> 2009/11/13 Darius Buntinas <<a href="mailto:buntinas@mcs.anl.gov">buntinas@mcs.anl.gov</a>><br>
><br>><br>> The ability to use different networks for different<br>> destinations is a feature we intend to add, but it's not<br>> yet implemented. Nemesis does use shared-memory communication<br>> for intranode communication and the network for internode<br>> communication.<br>><br>> There are several internal interfaces for porting MPICH2 to new<br>> platforms or interconnects. This paper has a description of the<br>> interfaces:<br>> <a href="http://www.mcs.anl.gov/~buntinas/papers/europvmmpi06-nemesis.pdf" target="_blank">http://www.mcs.anl.gov/~buntinas/papers/europvmmpi06-nemesis.pdf</a><br>><br>> If you're interested in porting MPICH2 to a new interconnect,<br>> you should write a nemesis network module. The API is described<br>> here:<br>> <a href="http://wiki.mcs.anl.gov/mpich2/index.php/Nemesis_Network_Module_API" target="_blank">http://wiki.mcs.anl.gov/mpich2/index.php/Nemesis_Network_Module_API</a><br>
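> Since the thread is about adding HyperTransport, a new netmod boils down to filling in a table of operations. The skeleton below is only a sketch of the shape of such a module; the operation names are illustrative assumptions, and the authoritative interface is on the wiki page above:<br><pre>
/* Hypothetical skeleton of a nemesis network module for HT.  The ops
   and their names are assumptions for illustration; see the Nemesis
   Network Module API wiki page for the real interface. */
struct vc; /* opaque virtual connection */

typedef struct netmod_ops {
    int (*init)(void);           /* set up endpoints at startup      */
    int (*vc_init)(struct vc *); /* install per-vc send operations   */
    int (*poll)(void);           /* progress-engine hook             */
    int (*finalize)(void);       /* tear down at shutdown            */
} netmod_ops_t;

static int ht_init(void)             { return 0; /* open HT endpoints */ }
static int ht_vc_init(struct vc *vc) { (void)vc; return 0; }
static int ht_poll(void)             { return 0; /* drain HT queues   */ }
static int ht_finalize(void)         { return 0; }

/* The table such a module would export to nemesis, analogous to the
   per-netmod function pointer tables mentioned in this thread. */
const netmod_ops_t ht_netmod = { ht_init, ht_vc_init, ht_poll, ht_finalize };
</pre>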
><br>> I hope this helps,<br>><br>> -d<br>><br>> On 11/12/2009 11:29 PM, yaohui wrote:<br>> > Thank you very much!<br>> > Finally I got a response!<br>> > I am doing some research on adding HT (HyperTransport) into MPICH2.<br>> > I am just wondering: when we configure MPICH2 with<br>> > ./configure --with-device=ch3:sock (or ch3:nemesis),<br>> > we definitely select one network as the channel to communicate over.<br>> > But is there any possibility of dynamically controlling the network<br>> > we use, so that sometimes messages go through Ethernet and sometimes<br>> > through IB? Could you give me some materials to help me understand the<br>> > difference between ~/ch3/channel/sock, ~/ch3/channel/sctp,<br>> > ~/ch3/channel/nemesis/nemesis/net_mod/tcp_module,<br>> > ~/ch3/channel/nemesis/nemesis/net_mod/sctp_module, and<br>> > ~/ch3/channel/nemesis/nemesis/net_mod/ib_module?<br>> ><br>> > Thank you again!<br>> ><br>> > Best Wishes!<br>> > -----Original Message-----<br>
> > From: <a href="mailto:mpich-discuss-bounces@mcs.anl.gov">mpich-discuss-bounces@mcs.anl.gov</a><br>> > [mailto:<a href="mailto:mpich-discuss-bounces@mcs.anl.gov">mpich-discuss-bounces@mcs.anl.gov</a>] On Behalf Of Darius Buntinas<br>> > Sent: November 12, 2009, 23:30<br>> > To: <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> > Subject: Re: [mpich-discuss] how to connect MPI_Send with CH3_isend<br>> ><br>> > The best way to understand what happens is to step through the calls<br>> > with a debugger. The channel is selected in the configure step, and<br>> > only that channel will be compiled, so there will be only one copy of<br>> > ch3_isend in the binary.<br>> ><br>> > -d<br>> ><br>> > On 11/12/2009 05:54 AM, hu yaohui wrote:<br>> >> Hi guys!<br>> >> Could someone tell me the detailed calling flow from MPI_Send() to<br>> >> CH3_isend()?<br>> >> How can I control which CH3_isend() is called among sock, sctp, and<br>> >> shm? Thank you very much!<br>> >><br>> >> Best wishes!<br>
> >> Loki!<br><br>_______________________________________________<br>mpich-discuss mailing list<br>
<a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br><a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
<br></blockquote></div><br>