[mpich-discuss] Re: how to connect MPI_Send with CH3_isend
hu yaohui
loki2441 at gmail.com
Sun Nov 15 19:11:38 CST 2009
Thank you very much!
Yes, you are right! I want the processes to communicate that way.
I really appreciate your help; I'll give it a shot. I recently read
something about MPICH/Madeleine, and I hope that helps.
Thank you very much again!
--Best wishes
--Yaohui Hu
2009/11/15 Darius Buntinas <buntinas at mcs.anl.gov>
>
> Do you mean using sockets to communicate between processes 1 and
> 2, but using IB to communicate between processes 1 and 3?
>
> We intend to support this, but not all of the pieces are there yet.
> Each netmod has its own table of function pointers.
>
> If you take a look at MPID_nem_vc_init(), you'll see where
> MPID_nem_netmod_func->vc_init(vc) is called to set the function
> pointers for the various communication operations used when
> sending a message over a particular network module.
>
> Currently MPID_nem_netmod_func is set globally, so that all
> internode vcs use the same netmod. In order to use different
> netmods for different vcs, you'll have to call the vc_init() of
> the netmod you want to use on that vc.
>
> You'll see other places where MPID_nem_netmod_func is referenced,
> and you'll have to fix those so that they either call the function
> for the netmod associated with that vc, or (e.g., in the case of
> poll or finalize) call the function for all active netmods.
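>
> As a rough sketch of that change (with simplified, hypothetical
> struct and field names, not the real MPICH2 declarations), the idea
> is to dispatch through a table stored on each vc instead of through
> the single global MPID_nem_netmod_func:
>
>     #include <stddef.h>
>
>     struct vc;  /* forward declaration so the table can reference it */
>
>     /* Hypothetical per-netmod function table; the real one has
>      * more entries. */
>     typedef struct netmod_funcs {
>         int (*vc_init)(struct vc *vc);
>         int (*isend)(struct vc *vc, void *hdr, void *data, size_t len);
>         int (*poll)(void);
>         int (*finalize)(void);
>     } netmod_funcs_t;
>
>     struct vc {
>         int remote_rank;
>         const netmod_funcs_t *netmod;  /* chosen per vc, e.g. a tcp
>                                         * table or an ib table */
>     };
>
>     /* Instead of MPID_nem_netmod_func->vc_init(vc), pick the table
>      * when the vc is created... */
>     static int example_vc_init(struct vc *vc, const netmod_funcs_t *nm)
>     {
>         vc->netmod = nm;
>         return vc->netmod->vc_init(vc);
>     }
>
>     /* ...and route every later operation through the vc's own table. */
>     static int example_isend(struct vc *vc, void *hdr, void *data,
>                              size_t len)
>     {
>         return vc->netmod->isend(vc, hdr, data, len);
>     }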
>
> -d
>
> On 11/14/2009 01:07 AM, hu yaohui wrote:
> > Thank you very much! I am just wondering: if I implement two
> > internode communication channels, can I control which messages go
> > through which channel, so that some kinds of messages go through
> > one channel while others go through the other channel (rather than
> > one kind of message going through different channels)? Thank
> > you very much!
> >
> > On Sat, Nov 14, 2009 at 2:32 AM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> >
> > If you use the default Nemesis channel (ch3:nemesis), communication
> > within a node goes over shared memory, and communication across
> > nodes goes over the network.
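> >
> > (For example, a default build configured with
> > ./configure --with-device=ch3:nemesis uses shared memory within a
> > node and, by default, the TCP netmod between nodes.)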
> >
> > Rajeev
> >
> >
> > ------------------------------------------------------------------------
> > *From:* mpich-discuss-bounces at mcs.anl.gov
> > [mailto:mpich-discuss-bounces at mcs.anl.gov] *On Behalf Of* hu yaohui
> > *Sent:* Friday, November 13, 2009 3:13 AM
> > *To:* mpich-discuss at mcs.anl.gov
> > *Subject:* Re: [mpich-discuss] Re: how to connect MPI_Send with
> > CH3_isend
> >
> > Thank you very much!
> > I would really appreciate it if you could give me some detailed
> > steps, from the implementation of these functions to deploying
> > MPICH2 on my clusters (such as how to modify the configure
> > options, things like that).
> > I did a test yesterday: I installed MPICH2 three times and checked
> > the make output, choosing ./configure --with-device=ch3:sock,
> > ./configure --with-device=ch3:sctp, and the default.
> > Do these devices select the internode communication channel? And is
> > the default intranode communication channel shared memory, which
> > can't be modified?
> >
> > Thank you very much!
> >
> > Best wishes!
> > 2009/11/13 Darius Buntinas <buntinas at mcs.anl.gov>
> >
> >
> > The ability to use different networks for different
> > destinations is a feature we intend to add, but it's not
> > yet implemented. Nemesis does use shared memory for intranode
> > communication and the network for internode.
> >
> > There are several internal interfaces for porting MPICH2 to new
> > platforms or interconnects. This paper has a description of the
> > interfaces:
> > http://www.mcs.anl.gov/~buntinas/papers/europvmmpi06-nemesis.pdf
> >
> > If you're interested in porting MPICH2 to a new interconnect,
> > you should write a nemesis network module. The API is described
> > here:
> > http://wiki.mcs.anl.gov/mpich2/index.php/Nemesis_Network_Module_API
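> >
> > If it helps, here is a hypothetical skeleton of what a module
> > provides, say for an HT module. The actual entry points, names,
> > and signatures are defined by the API page above, so treat the
> > names below as illustrative placeholders only:
> >
> >     /* Illustrative placeholders, not the real nemesis API names. */
> >     static int ht_init(void)     { return 0; } /* bring up the interconnect */
> >     static int ht_finalize(void) { return 0; } /* tear it down */
> >     static int ht_poll(void)     { return 0; } /* progress pending sends/recvs */
> >     static int ht_vc_init(void *vc)
> >     {
> >         /* install HT-specific send routines into this vc */
> >         (void)vc;
> >         return 0;
> >     }
> >
> >     /* The module exports one table of entry points, which nemesis
> >      * then calls through MPID_nem_netmod_func. */
> >     static struct {
> >         int (*init)(void);
> >         int (*finalize)(void);
> >         int (*poll)(void);
> >         int (*vc_init)(void *vc);
> >     } const ht_module = { ht_init, ht_finalize, ht_poll, ht_vc_init };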
> >
> > I hope this helps,
> >
> > -d
> >
> > On 11/12/2009 11:29 PM, yaohui wrote:
> > > Thank you very much!
> > > Finally I got a response!
> > > I am doing some research on adding HT (HyperTransport) support
> > > to MPICH2. I am just wondering: when we configure MPICH2 with
> > > ./configure --with-device=ch3:sock (or ch3:nemesis),
> > > that means we definitely select one network as the channel to
> > > communicate over. But is there any possibility of dynamically
> > > controlling which network is used, so that sometimes messages go
> > > through Ethernet and sometimes through IB? Could you give me some
> > > materials to understand the difference between ~/ch3/channel/sock,
> > > ~/ch3/channel/sctp,
> > > ~/ch3/channel/nemesis/nemesis/net_mod/tcp_module,
> > > ~/ch3/channel/nemesis/nemesis/net_mod/sctp_module, and
> > > ~/ch3/channel/nemesis/nemesis/net_mod/ib_module?
> > >
> > > Thank you again!
> > >
> > > Best Wishes!
> > > -----Original Message-----
> > > From: mpich-discuss-bounces at mcs.anl.gov
> > > [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Darius Buntinas
> > > Sent: November 12, 2009 23:30
> > > To: mpich-discuss at mcs.anl.gov
> > > Subject: Re: [mpich-discuss] how to connect MPI_Send with
> > > CH3_isend
> > >
> > > The best way to understand what happens is to step through the calls
> > > with a debugger. The channel is selected in the configure step, and
> > > only that channel will be compiled, so there will be only one copy of
> > > ch3_isend in the binary.
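> > >
> > > (For instance, assuming a build with debugging symbols, you can
> > > run the program under gdb, set a breakpoint on the channel's
> > > ch3_isend routine, and print a backtrace there to see the whole
> > > call path down from MPI_Send.)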
> > >
> > > -d
> > >
> > > On 11/12/2009 05:54 AM, hu yaohui wrote:
> > >> Hi guys!
> > >> Could someone tell me the detailed call flow from MPI_Send() to
> > >> CH3_isend()?
> > >> How do I control which CH3_isend() is called among sock, sctp,
> > >> and shm? Thank you very much!
> > >>
> > >> Best wishes!
> > >>
> > >> loki!
>
> _______________________________________________
> mpich-discuss mailing list
> mpich-discuss at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss