[MPICH] Heterogeneous cluster MPICH2

Rajeev Thakur thakur at mcs.anl.gov
Wed Jun 22 20:54:56 CDT 2005


> I am surprised that MPICH2 cannot handle the endianness issue.

It is not implemented yet in MPICH2, but we intend to support it just as we
did in MPICH-1. Can't say when it will be available though.
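
For illustration, here is a minimal plain-C sketch (not MPICH2 code) of what
the byte-order problem looks like on the wire; the hex value is arbitrary:

    /* Shows why a 4-byte integer is misread when its raw bytes travel
     * between hosts of different byte order and nobody swaps them. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        uint32_t value = 0x01020304;          /* the number the sender stores */
        unsigned char bytes[4];

        memcpy(bytes, &value, sizeof value);  /* the bytes that go on the wire */
        printf("sender stores 0x01020304 as: %02x %02x %02x %02x\n",
               bytes[0], bytes[1], bytes[2], bytes[3]);

        /* big-endian (sparc64):     01 02 03 04
         * little-endian (athlon64): 04 03 02 01
         * A receiver of the opposite byte order that uses these bytes
         * as-is sees 0x04030201 unless the MPI library swaps them. */
        unsigned char reversed[4] = { bytes[3], bytes[2], bytes[1], bytes[0] };
        uint32_t misread;
        memcpy(&misread, reversed, sizeof misread);
        printf("the same bytes read in the opposite order: 0x%08x\n", misread);
        return 0;
    }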

Rajeev
 

> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov 
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Philip 
> Sydney Lavers
> Sent: Wednesday, June 22, 2005 7:53 PM
> To: Syed Irfan; robl at mcs.anl.gov; mpich-discuss at mcs.anl.gov
> Subject: Re: [MPICH] Heterogeneous cluster MPICH2
> 
> hello Irfan and Rob and folks
> 
> >
> >Is it possible for you to verify if the same problem happens when you
> >communicate between 2 sparc64's? If it does not then there might be
> >endian issues with mpich2 implementation.
> >
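
For reference, a minimal round-trip test like the sketch below (not the actual
program involved) would be enough for that check, run with one process on each
sparc64:

    #include <mpi.h>
    #include <stdio.h>

    /* Rank 0 sends an int to rank 1 and receives it back; if the value
     * survives the round trip, basic communication between the two
     * hosts is working.  Assumes exactly two processes. */
    int main(int argc, char *argv[])
    {
        int rank, value = 42;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
            printf("rank 0 got back %d (expected 42)\n", value);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }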
>  
> I have run out of computers that I have any control over. It
> will take time to get others to install MPICH2 on other
> machines which I may get an account on. One candidate is a Mac
> G5, by the way. 
> 
> >It looks like you are trying to get proton and paul1 to talk to each
> >other.  sparc64 and athlon64 are different endian architectures:
> 
> >http://www.mcs.anl.gov/web-mail-archive/lists/mpich-discuss/2005/06/msg00060.html
> 
> >Am I misunderstanding your problem?  
> 
> No - that is my problem, but I was only trying to assist Irfan.
> 
> "I'm allright Jack" because my own cluster is and will be
> homogeneous - all AMD 64 bit processors.
> 
> I am surprised that MPICH2 cannot handle the endianness issue.
> 
> I installed MPICH2 because I need MPI, ultimately for
> CRYSTAL03, and the "web talk" seemed to encourage MPI-2 in
> preference to MPI. I bought the book "Using MPI-2" and wrote
> my programmes with due recognition of the warnings about the
> size of displacement units on page 147. This meant that I
> could mix Athlon 32-bit and Athlon64 64-bit processors. I
> simply assumed that endianness would be accounted for in the
> same way - it is no problem in PVM.
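
As an aside, a minimal sketch of the displacement-unit idea mentioned above
(not code from CRYSTAL03 or the book): give MPI_Win_create a disp_unit equal
to the element size, so that target displacements count elements rather than
bytes. The window size, target rank, and index below are made up, and at least
two processes are assumed:

    #include <mpi.h>

    #define N 100

    int main(int argc, char *argv[])
    {
        double local[N], buf = 3.14;
        MPI_Win win;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* disp_unit = sizeof(double): every rank counts displacements
         * in doubles, regardless of its pointer or MPI_Aint width. */
        MPI_Win_create(local, N * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        if (rank == 0) {
            /* write one double into element 5 of rank 1's window;
             * the displacement is an element index, not a byte count */
            MPI_Put(&buf, 1, MPI_DOUBLE, 1, 5, 1, MPI_DOUBLE, win);
        }
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }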
> 
> regards,
> 
> Phil Lavers
> 
> 



