Not to worry about the "English". Many people in the world feel that we in the USA do not speak "English" anyway. :)

My own setup is very similar to yours, and I start my jobs the same way. The master node can allocate a portion of the workload to itself; alternatively, it can just spread the data and collect the results. The master node can be any node in the cluster, not just the console node. One thing you could do is use the console node to log onto some other node in the cluster, and from there start mpd and mpiexec with a set of nodes that does not include the console node.

Being at the intermediate level, at best, I have not tried to start an MPICH job without one of the nodes being designated the master; one of the nodes has to decide how to allocate the workload and generally get things going. On the other hand, from a logical perspective, there surely must be a way for all nodes to simply start running exactly the same program, with each node establishing its own place in the cluster and beginning work (I have put a rough sketch of that idea in a P.S. below). Of course, the program running on each node does not have to be the same. The nodes just need to know who else is in the cluster, and what to communicate to which node at what point in the process. This is not really an MPI issue in my mind; rather, it is a matter of programming different independent processes to participate in a "cloud" of activity.

Let's continue this conversation so we get a better understanding of what you are trying to accomplish.

Best,
Peter.
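
P.S. In case it helps, here is a minimal sketch (my own illustration, not anything taken from your setup) of the "everyone runs the same program" idea: each process asks MPI for its rank and uses that to decide its role, with rank 0 playing master purely by convention. The file name is just something I made up.

    /* spmd_sketch.c -- every node runs this same binary */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* my place in the cluster */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many of us there are */

        if (rank == 0) {
            /* "master" duties: spread data, collect results,
               and optionally take a share of the work as well */
            printf("master: coordinating %d processes\n", size);
        } else {
            /* worker: do my slice and talk to whichever ranks I need to */
            printf("worker %d of %d: starting my share\n", rank, size);
        }

        MPI_Finalize();
        return 0;
    }

On an mpd-based MPICH2 setup, compiled with mpicc, I would expect something along the lines of "mpdboot -n 4 -f mpd.hosts" followed by "mpiexec -n 4 ./spmd_sketch" to launch it; leaving the console node out of mpd.hosts keeps it from being given any of the work.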