<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=us-ascii">
<META NAME="Generator" CONTENT="MS Exchange Server version 6.5.7036.0">
<TITLE>RE: [mpich-discuss] Suggestions solicited</TITLE>
</HEAD>
<BODY>
<!-- Converted from text/plain format -->
<P><FONT SIZE=2>Hi,<BR>
What you need is for the server to do an MPI_Comm_accept() to accept connections from volatile clients. MPI_Comm_accept() creates a new communicator that can be used to communicate with the client. This communicator can be disconnected with MPI_Comm_disconnect() without calling MPI_Finalize().<BR>
A simple example (serial server) of this kind of scenario is given below:<BR>
<BR>
#################### server ###################<BR>
<BR>
#include "mpi.h"<BR>
#include <stdio.h><BR>
<BR>
#define NUM_CLIENTS 10<BR>
int main(int argc, char *argv[]){<BR>
int i;<BR>
char portName[MPI_MAX_PORT_NAME];<BR>
MPI_Init(&argc, &argv);<BR>
MPI_Open_port(MPI_INFO_NULL, portName);<BR>
printf("portName = %s\n", portName);<BR>
fflush(stdout); /* make sure the port name is visible before blocking in accept */<BR>
for(i = 0; i < NUM_CLIENTS; i++){<BR>
MPI_Comm newcomm;<BR>
MPI_Comm_accept(portName, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &newcomm);<BR>
printf("Accepted a connection\n");<BR>
MPI_Comm_disconnect(&newcomm);<BR>
}<BR>
MPI_Close_port(portName);<BR>
MPI_Finalize();<BR>
}<BR>
<BR>
#################### server ###################<BR>
#################### client ###################<BR>
<BR>
#include "mpi.h"<BR>
#include <stdio.h><BR>
#include <string.h><BR>
<BR>
int main(int argc, char *argv[]){<BR>
char portName[MPI_MAX_PORT_NAME];<BR>
MPI_Comm newcomm;<BR>
<BR>
MPI_Init(&argc, &argv);<BR>
printf("portname : ");<BR>
fflush(stdout);<BR>
/* gets() is unsafe; use fgets() and strip the trailing newline */<BR>
fgets(portName, MPI_MAX_PORT_NAME, stdin);<BR>
portName[strcspn(portName, "\n")] = '\0';<BR>
MPI_Comm_connect(portName, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &newcomm);<BR>
printf("Connected to %s\n", portName);<BR>
MPI_Comm_disconnect(&newcomm);<BR>
MPI_Finalize();<BR>
}<BR>
<BR>
#################### client ###################<BR>
<BR>
You can do more sophisticated things, such as writing the port name to a file for clients to read. (Unfortunately, publishing and looking up names won't work with smpd for now.)<BR>
As long as you use SMPD as the process manager (the default on Windows; configure with "--with-pm=smpd" on Unix) and make sure that all the machines have the same data model (e.g., the same architecture), you should be fine.<BR>
<BR>
Regards,<BR>
Jayesh<BR>
<BR>
-----Original Message-----<BR>
From: mpich-discuss-bounces@mcs.anl.gov [<A HREF="mailto:mpich-discuss-bounces@mcs.anl.gov">mailto:mpich-discuss-bounces@mcs.anl.gov</A>] On Behalf Of Hiatt, Dave M<BR>
Sent: Thursday, December 04, 2008 10:52 AM<BR>
To: mpich-discuss@mcs.anl.gov<BR>
Subject: [mpich-discuss] Suggestions solicited<BR>
<BR>
My topology is as follows: I have my MPI cluster on a pile of Linux workstations, and I have network access from an AIX server and from a bunch of Windows workstations. Right now, to trigger a run, I have to rlogin to the node 0 Linux box and run mpiexec to start.<BR>
<BR>
My plan is to convert the parallel application into a "service" that hangs on a blocking recv, waiting for a message with the material calculation details for the next set of calculations.<BR>
<BR>
The idea would be to let the cluster receive requests from any of the Windows workstations.<BR>
<BR>
My first idea was to have a workstation become part of the cluster in a special communicator: start a process, call MPI::Init, send a message to node 0 with the calculation data, call MPI::Finalize, and cease execution; then later return, repeat the steps, and retrieve the results. But it was pointed out to me that the only official call that can be made after MPI::Finalize is MPI::Finalized, so I wondered whether the following is possible.<BR>
<BR>
If the process on the workstation that I propose to be volatile runs and terminates, and a new process comes back later to request the results, is that within the bounds of the standard and a supported approach?<BR>
<BR>
Also, as long as the hardware the Linux and Windows workstations run on has the same endianness, would this still be considered a homogeneous MPI cluster, or is there some issue between Windows and Linux?<BR>
<BR>
How in general do others approach this, that is, allowing workstations to send in new requests for runs and later come back to retrieve the results?<BR>
<BR>
<BR>
If you lived here you'd be home by now<BR>
Dave Hiatt<BR>
Manager, Market Risk Systems Integration CitiMortgage, Inc.<BR>
1000 Technology Dr.<BR>
Third Floor East, M.S. 55<BR>
O'Fallon, MO 63368-2240<BR>
<BR>
Phone: 636-261-1408<BR>
Mobile: 314-452-9165<BR>
FAX: 636-261-1312<BR>
Email: Dave.M.Hiatt@citigroup.com<BR>
<BR>
<BR>
<BR>
</FONT>
</P>
</BODY>
</HTML>