In the installation we have tried here, there is definitely a bin/ directory containing mpirun and mpiexec, both of which point to mpiexec.py. I have modified PATH so that "which mpirun" returns the mpirun from the MPICH2 installation. So if I use mpirun from the command line instead of bsub, it calls the mpirun from the MPICH2 directory and runs correctly on one node (with 8 processes).
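For reference, the PATH change was along these lines; the install prefix shown is only a placeholder, not our actual MPICH2 location:

    # Prepend the MPICH2 bin/ directory (placeholder path)
    export PATH=/opt/mpich2/bin:$PATH

    # Verify which launcher is picked up now
    which mpirun                        # should print the MPICH2 bin/ copy
    mpirun -np 8 ./helloworld.mpich2    # runs correctly on one node, 8 processes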
Gauri.
On Thu, Mar 5, 2009 at 8:34 PM, Dave Goodell <goodell@mcs.anl.gov> wrote:
> On Mar 5, 2009, at 8:23 AM, Gauri Kulkarni wrote:
>
>> Of course, using the same command, only with "mpirun -srun ./helloworld.mpich2", still gives me the error that no mpd is running on the node to which the job was assigned.
>>
>> Does this mean I do not need to use mpirun when MPICH2 is configured with SLURM? What about software that makes specific calls to mpirun or mpiexec?
>
> I believe that the mpirun command is for the HP MPI packaged with your cluster and will not work with MPICH2. You'll have to figure out how to handle each usage of mpiexec or mpirun on a case-by-case basis. One strategy for fixing this is to create a shell script named "mpirun" or "mpiexec" that simply invokes the appropriate bsub/srun commands, and then figure out how to set the PATH correctly for those applications so that your custom mpirun is called instead of the system one.
>
> -Dave
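A minimal sketch of the wrapper approach Dave describes above, assuming MPICH2 was configured with SLURM so that srun can launch the ranks directly; the script name, "-np" handling, and placement in PATH are illustrative only:

    #!/bin/sh
    # Hypothetical "mpirun" wrapper, placed ahead of the system mpirun in PATH,
    # for applications that hard-code a call to mpirun/mpiexec.
    NP=1
    if [ "$1" = "-np" ]; then
        NP=$2
        shift 2
    fi
    # Launch the remaining command line through SLURM instead of mpd.
    exec srun -n "$NP" "$@"

As long as the directory holding this wrapper comes first in PATH, those applications would pick it up instead of the HP MPI mpirun.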