[mpich-discuss] Allowing client systems to come and go

Dave Goodell goodell at mcs.anl.gov
Mon Jan 25 14:11:38 CST 2010


The MPI Standard provides mechanisms to support this usage (see  
MPI-2.2 chapter 10, "Process Creation and Management").  However, in  
practice you're just asking for pain if you want to use MPI like this  
in a long-running fashion.  MPICH2 (and, AFAICT, basically all other  
MPI implementations) doesn't support dynamic processes robustly  
enough to be usable on a long-running basis.
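For reference, the mechanism chapter 10 provides for this is the  
port-based connect/accept model: the gateway opens a port with  
MPI_Open_port and loops on MPI_Comm_accept; each client attaches with  
MPI_Comm_connect and drops out with MPI_Comm_disconnect.  Here's a  
minimal, untested sketch of the gateway side; all error handling and  
the out-of-band delivery of the port name are left out (the  
MPI_Publish_name / MPI_Lookup_name calls can stand in for that step):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];

    MPI_Init(&argc, &argv);

    /* Open a port and advertise it; clients need this string. */
    MPI_Open_port(MPI_INFO_NULL, port);
    printf("gateway listening on: %s\n", port);

    for (;;) {  /* the 24/7 accept loop on the gateway */
        MPI_Comm client;
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
        /* ... exchange setup/status messages over 'client' ... */
        MPI_Comm_disconnect(&client);   /* client has come and gone */
    }

    /* not reached in this sketch */
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}

/* A client attaches with the matching calls:
 *   MPI_Comm server;
 *   MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
 *   ... talk to the gateway ...
 *   MPI_Comm_disconnect(&server);
 */

That's the shape of it on paper; the pain I mentioned is that real  
implementations tend to wedge or leak when clients churn through  
connect/disconnect cycles for days on end.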

On top of the dynamic process support, you'd be mixing in heterogeneous  
system support (Windows/Linux processes talking to each other), which  
has its own set of issues.

You're welcome to give it a shot, but I definitely wouldn't call it a  
practical approach under current conditions.

-Dave

On Jan 25, 2010, at 1:59 PM, Hiatt, Dave M wrote:

> It is advantageous for us to use MPICH2 to handle the setup and  
> status traffic between the client computers (where users enter data  
> and initiate runs) and the gateway system to our cluster.  The  
> client systems are Windows 7; the gateway box is CentOS 5.4, as is  
> the cluster.  I thought I saw some references to how to allow  
> systems to attach to and then drop out of communicators on an ad hoc  
> basis.  The Linux gateway system will be up 24/7 and can host all  
> the communicators, or participate in them, depending on the best  
> approach.  So can someone refresh my memory: did I see a discussion  
> on this topic within the last 6 months?
>
> To start with there will be about 25 client systems.  Is this a  
> practical approach?  Thanks
>
> dave
>
>
> "Consequences, Schmonsequences, as long as I'm rich". - Daffy Duck
> Dave Hiatt
> Market Risk Systems
> CitiMortgage, Inc.
> 1000 Technology Dr.
> Third Floor East, M.S. 55
> O'Fallon, MO 63368-2240
>
> Phone:  636-261-1408
> Mobile: 314-452-9165
> FAX:    636-261-1312
> Email:     Dave.M.Hiatt at citigroup.com
>


