[MPICH] MPICH2 on Windows XP Workgroups and Samba networks

Calin Iaru calin at dolphinics.no
Wed Apr 5 04:44:00 CDT 2006


Hi Etienne,

    there are probably cheaper ways to do this. For instance, the server 
node can be a Linux box. Or it may be that the smpd service or mpiexec 
creates too many pipes; this problem does not occur with the older MPICH 
1.2, so finding the cause would be interesting.
    In the meantime I connected all the nodes to a Server 2003 domain. It 
was already there, and the process went fine.
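
As a sketch of the workaround (the host name "fs01" and share name "mpi" 
are hypothetical): stage the binary on a share that is not hosted by an 
XP node, so each rank's connection goes to a server that is not subject 
to XP's 10-connection cap.

```shell
REM Hypothetical names: "fs01" is a Server 2003 (or Linux/Samba) file
REM server, "mpi" is a share on it holding the test binary. Launching
REM from this UNC path means the 11 ranks connect to fs01, not to an
REM XP workstation, so the inbound-connection limit is not hit.
mpiexec -n 11 \\fs01\mpi\cpi.exe
```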

Best regards,
    Calin


DevTeam wrote:
> Hi Calin,
>
> As far as I know it's a way found by Microsoft to stop people from 
> using a simple XP licence to administer a server. If you want more 
> than 10 connections, you need Windows Server 2003 installed on your 
> machine, which is much more expensive ...
>
> Regards,
>
> Etienne
>
> ----- Original Message ----- From: "Calin Iaru" <calin at dolphinics.no>
> To: <peter_raeth at juno.com>
> Cc: <mpich-discuss at mcs.anl.gov>
> Sent: Tuesday, April 04, 2006 4:27 PM
> Subject: Re: [MPICH] MPICH2 on Windows XP Workgroups and Samba networks
>
>
>> Hi Peter,
>>
>>
>>    Suppose you run, from mycomputer,
>>        mpiexec -n 11 \\mycomputer\c$\cpi.exe
>> with the ssm channel enabled. You will have 11 machines trying to 
>> connect to the UNC path \\mycomputer. There is a Knowledge Base 
>> article about Windows XP accepting no more than 10 connections from 
>> remote machines. These connections can be of any type: pipes, mapped 
>> drives, and more. For reference, see 
>> http://support.microsoft.com/?scid=kb;en-us;314882
>>
>>    This does not mean that the cluster size is limited to 10 XP 
>> machines. It just means that the path from which the test is launched 
>> should not belong to any XP node.
>>
>> Best regards,
>>    Calin
>>
>> peter_raeth at juno.com wrote:
>>> This is a most interesting comment, "Windows XP which can handle no 
>>> more than 10 remote connections."  Does this mean that MPICH2 
>>> Windows XP clusters are limited to 10 nodes?
>>>
>>
>>
>>
>
>
