[mpich-discuss] unable to start mpd on Scyld Beowulf Cluster compute nodes

Wuyin Lin wuyin.lin at gmail.com
Wed May 21 14:26:27 CDT 2008


Dear Rajeev,

Thanks for the reply. I tried all of those mpdcheck options (e.g., -v, -l,
-pc) after rsh'ing to a node; every one of them quit silently. Maybe the
thin kernel on the compute nodes of my Scyld Beowulf system (unchanged since
it shipped from the vendor) is too minimal, and even the error messages are
being suppressed or redirected somewhere. (Not everything is suppressed,
though: the shell does complain if I enter a wrong command, for example.)
Have any users successfully ported MPICH2 to such a system?
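
For reference, this is roughly what I ran after rsh'ing to a node (the node
name, install path, and PYTHONHOME value are placeholders standing in for my
local setup):

    rsh n0
    export PYTHONHOME=/usr                   # placeholder value
    /usr/local/mpich2/bin/mpdcheck -pc       # print host/IP configuration
    /usr/local/mpich2/bin/mpdcheck -l
    /usr/local/mpich2/bin/mpdcheck -v
    # each returns immediately with no output, even with stdout and
    # stderr captured in a file:
    /usr/local/mpich2/bin/mpdcheck -v > /tmp/mpdcheck.out 2>&1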

Thanks a lot.

Wuyin

On Wed, May 21, 2008 at 2:04 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:

>  It may be something with the networking configuration on the machines. To
> debug, you can use the mpdcheck utility and follow all the steps described
> in the installation guide.
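>
> In case it is useful, the server/client test from the guide looks roughly
> like this (the hostname and port below are placeholders):
>
>     # on the master node; prints the host and port it is listening on:
>     mpdcheck -s
>
>     # on a compute node, using the host and port printed above:
>     mpdcheck -c master 32996
>
> If the client cannot reach the server, the problem is in the network
> configuration rather than in mpd itself.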
>
> Rajeev
>
>  ------------------------------
> From: owner-mpich-discuss at mcs.anl.gov [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Wuyin Lin
> Sent: Monday, May 19, 2008 11:26 AM
> To: mpich-discuss at mcs.anl.gov
> Subject: [mpich-discuss] unable to start mpd on Scyld Beowulf Cluster compute nodes
>
> Hello,
>
> My system is an AMD Opteron cluster running Penguin Computing Scyld Linux
> release 30cz, with a full Linux installation on the master node but a thin
> kernel on the compute nodes. Communication is mostly through bproc; rshd is
> also enabled on the compute nodes for a separate MPICH1 installation.
>
> After installing MPICH2, I have no problem starting mpd on the master node.
> But launching mpd on a compute node always exits silently, whether I use
> bpsh or first rsh to the node and start it there. PYTHONHOME has been set
> properly so that the required libraries can be imported, and files resident
> on the master node are visible at the same paths on the compute nodes via
> NFS. I am at a loss as to what else such a system needs to bring up mpd.
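>
> For reference, the launches I have tried look roughly like this (the node
> number, install path, and PYTHONHOME value stand in for my actual setup):
>
>     # from the master node via bproc:
>     bpsh 0 /usr/local/mpich2/bin/mpd &
>
>     # or by logging into the node first:
>     rsh n0
>     export PYTHONHOME=/usr            # placeholder value
>     /usr/local/mpich2/bin/mpd &
>
> In both cases mpd exits immediately without printing anything.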
>
> Any advice is appreciated. Thank you in advance.
>
>
> Wuyin Lin
>
>