[mpich-discuss] Re: how to connect MPI_Send with CH3_isend

hu yaohui loki2441 at gmail.com
Sat Nov 14 03:07:19 CST 2009


Thank you very much! I am just wondering: if I implement two internode
communication channels, can I control which messages go through which channel,
so that one kind of message goes through one channel while another kind goes
through the other (rather than a single kind of message being spread across
different channels)? Thank you very much!

On Sat, Nov 14, 2009 at 2:32 AM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:

>  If you use the default Nemesis channel (ch3:nemesis), communication
> within a node goes over shared memory and across nodes over some
> communication channel.
>
> Rajeev
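
For concreteness, the channel is fixed when MPICH2 is built. Roughly (the option
spellings and the set of available network modules vary between MPICH2 releases,
so check the README that ships with your version):

    ./configure --with-device=ch3:nemesis      # default: shared memory within a node, TCP between nodes
    ./configure --with-device=ch3:sock         # sockets-only channel: all traffic goes over TCP sockets
    ./configure --with-device=ch3:nemesis:mx   # nemesis with an explicitly named network module, where supported
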
>
>  ------------------------------
> *From:* mpich-discuss-bounces at mcs.anl.gov [mailto:
> mpich-discuss-bounces at mcs.anl.gov] *On Behalf Of *hu yaohui
> *Sent:* Friday, November 13, 2009 3:13 AM
> *To:* mpich-discuss at mcs.anl.gov
> *Subject:* Re: [mpich-discuss] Re: how to connect MPI_Send with CH3_isend
>
>  Thank you very much!
>    I would really appreciate it if you could give me some detailed steps, from
> the implementation of these functions to deploying MPICH2 on my cluster (for
> example, how to modify the configure options and so on).
>    I did a test yesterday: I installed MPICH2 three times and checked the make
> output, choosing ./configure --with-device=ch3:sock, --with-device=ch3:sctp,
> and the default.
> Do these devices refer to the internode communication channel? And is the
> default intranode communication channel shared memory, which cannot be
> modified?
>
> thank you very much!
>
> best wishes!
> 2009/11/13 Darius Buntinas <buntinas at mcs.anl.gov>
>
>>
>> The ability to use different networks for different
>> destinations is a feature we intend to add, but it's not
>> yet implemented.  Nemesis does use shared-memory communication
>> for intranode communication and the network for internode.
>>
>> There are several internal interfaces for porting MPICH2 to new
>> platforms or interconnects.  This paper has a description of the
>> interfaces:
>> http://www.mcs.anl.gov/~buntinas/papers/europvmmpi06-nemesis.pdf
>>
>> If you're interested in porting MPICH2 to a new interconnect,
>> you should write a nemesis network module.  The API is described
>> here:
>> http://wiki.mcs.anl.gov/mpich2/index.php/Nemesis_Network_Module_API
>>
>> I hope this helps,
>>
>> -d
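
To give a rough idea of what a network module involves, here is a small
illustrative C sketch. The struct and function names below are invented for
illustration and are not the actual MPICH2 Nemesis netmod API; the real entry
points and signatures are the ones documented on the wiki page above. The
general shape, though, is a table of callbacks that the channel invokes for
setup, connection establishment, sending, and progress.

/* Illustrative sketch only: names here are hypothetical, not the real
 * MPICH2 Nemesis netmod API (see the wiki page above for the real one). */
#include <stddef.h>

struct netmod_funcs {
    int (*init)(void);                          /* bring the network up at MPI_Init time */
    int (*finalize)(void);                      /* tear it down at MPI_Finalize time */
    int (*connect_to)(int remote_rank);         /* establish a connection to a peer */
    int (*isend)(int remote_rank,
                 const void *buf, size_t len);  /* start a nonblocking send */
    int (*poll)(void);                          /* make progress on outstanding operations */
};

/* A module for a new interconnect (HyperTransport, say) would fill in this
 * table with its own implementations and register it with the channel. */
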
>>
>>
>>
>>
>>
>>
>> On 11/12/2009 11:29 PM, yaohui wrote:
>> > Thank you very much!
>> >       Finally I got a response!
>> > I am doing some research on adding HT (HyperTransport) support to MPICH2. I
>> > am just wondering: when we configure MPICH2 with
>> >                       ./configure --with-device=ch3:sock (or ch3:nemesis)
>> > we definitely select one network as the channel for communication. But is
>> > there any possibility of dynamically controlling which network is used, so
>> > that some messages go through Ethernet and others go through IB? Could you
>> > give me some material to help me understand the difference between
>> > ~/ch3/channel/sock, ~/ch3/channel/sctp,
>> > ~/ch3/channel/nemesis/nemesis/net_mod/tcp_module,
>> > ~/ch3/channel/nemesis/nemesis/net_mod/sctp_module, and
>> > ~/ch3/channel/nemesis/nemesis/net_mod/ib_module?
>> >
>> > Thank you again!
>> >
>> > Best Wishes!
>> > -----Original Message-----
>> > From: mpich-discuss-bounces at mcs.anl.gov
>> > [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Darius Buntinas
>> > Sent: November 12, 2009 23:30
>> > To: mpich-discuss at mcs.anl.gov
>> > Subject: Re: [mpich-discuss] how to connect MPI_Send with CH3_isend
>> >
>> > The best way to understand what happens is to step through the calls
>> > with a debugger.  The channel is selected in the configure step, and
>> > only that channel will be compiled, so there will be only one copy of
>> > ch3_isend in the binary.
>> >
>> > -d
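
As a concrete way to do that, one could build a minimal two-rank program and set
a breakpoint on the channel's send routine. The symbol name used below
(MPIDI_CH3_iSend) is an assumption about this MPICH2 version; check the actual
name with nm on your build or by grepping the ch3 source.

/* ping.c: a minimal two-rank program to step through with a debugger. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* enters the ch3 send path */
    else if (rank == 1)
        MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Finalize();
    return 0;
}

Then, for example:

    mpicc -g ping.c -o ping
    mpiexec -n 2 xterm -e gdb ./ping      # one debugger window per rank
    (gdb) break MPIDI_CH3_iSend           # assumed symbol name; adjust to your build
    (gdb) run
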
>> >
>> > On 11/12/2009 05:54 AM, hu yaohui wrote:
>> >> Hi guys!
>> >>     Could someone tell me the detailed calling flow from MPI_Send() to
>> >> CH3_isend()?
>> >>     How do I control which CH3_isend() is called among sock, sctp, and
>> >> shm? Thank you very much!
>> >>
>> >> Best wishes!
>> >>
>> >> loki!
>> >>
>> >>
>> >>
>> >>
>
> _______________________________________________
> mpich-discuss mailing list
> mpich-discuss at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>
>

