[mpich-discuss] building 32 bit shmem on 64 bit linux system
Rajeev Thakur
thakur at mcs.anl.gov
Thu Jun 26 16:50:20 CDT 2008
In the outputs of the configure and make steps, does it look like it is
building ch_p4 or ch_shmem?
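
For example, you can capture both outputs and search them for the device
name (the log file names here are just placeholders):

    ./configure --with-device=ch_shmem 2>&1 | tee configure.log
    make 2>&1 | tee make.log
    grep -i -e ch_p4 -e ch_shmem configure.log make.log
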
Rajeev
> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Daniel
> Fetchinson
> Sent: Thursday, June 26, 2008 4:47 PM
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] building 32 bit shmem on 64 bit
> linux system
>
> Out of desperation I looked at the output of
>
> strings my_test_program
>
> where my_test_program was compiled using mpicc (using absolute paths)
> from the 32-bit shmem mpi library. Strangely, ch_p4 appears in the
> output several times while ch_shmem doesn't. On another machine where
> I'm successfully compiling 32-bit shmem executables the output of
> 'strings my_test_program' only contains ch_shmem and not ch_p4.
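>
> For reference, the check was essentially this (the grep pattern is just
> the two device names):
>
>     strings my_test_program | grep -e ch_p4 -e ch_shmem | sort -u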
>
> So there must be something wrong with my mpicc, although I can't imagine
> what it is, since I never built with the ch_p4 device.
>
> Thanks for the advice guys, there seems to be progress :)
> Daniel
>
>
> > Did you do a "make install" after "make"? If not, the install directory
> > may still have an older installation of mpich-1, which could be ch_p4
> > rather than the ch_shmem build that you need.
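> >
> > For example, a rebuild into a fresh prefix (the prefix path below is
> > just a placeholder) rules out picking up a stale installation:
> >
> >   export CC='gcc -m32'
> >   export F77='g77 -m32'
> >   make clean
> >   ./configure --with-device=ch_shmem --prefix=/home/fetchinson/mpich32shmem
> >   make
> >   make install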
> >
> > A.Chan
> >
> > ----- "Daniel Fetchinson" <fetchinson at googlemail.com> wrote:
> >
> >> > It must be the mpirun then. Make sure you use the mpirun from the
> >> > same build. Give the full path if necessary.
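> >> >
> >> > For example (replace the prefix with your actual install directory;
> >> > the -np count is arbitrary):
> >> >
> >> >   /home/fetchinson/mpi32shmem/bin/mpirun -np 4 /home/fetchinson/test "testinput"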
> >> >
> >> > Rajeev
> >>
> >> Thanks, but there is only one installation of mpi on the machine and
> >> that installation is the one I keep compiling. So even if it wanted to
> >> execute the wrong mpicc/mpirun/etc it couldn't, because there is only
> >> one copy at any given time.
> >>
> >> Nevertheless I now recompiled both mpi and my program, and I still get
> >> the same error:
> >>
> >> 0 - MPI_INIT : MPIRUN chose the wrong device ch_shmem; program needs
> >> device ch_p4
> >> /home/fetchinson/mpich32shmem/bin/mpirun.ch_shmem: line 91: 21976
> >> Segmentation fault /home/fetchinson/test "testinput"
> >>
> >> Any ideas?
> >> Daniel
> >>
> >>
> >>
> >> >> > Did you recompile the program? Make sure you use the mpicc from
> >> >> > this build and not some other mpicc from another directory. Give
> >> >> > the full path if necessary.
> >> >> >
> >> >> > Rajeev
> >> >>
> >> >> Yes, I did recompile the program using mpicc from the newly
> >> >> built mpi library. Actually, there is only one mpicc on the
> >> >> whole machine.
> >> >>
> >> >> Cheers,
> >> >> Daniel
> >> >>
> >> >>
> >> >>
> >> >> >> Actually, I spoke a bit too soon. Configure, make, make install
> >> >> >> all work but the code still doesn't run, saying:
> >> >> >>
> >> >> >> 0 - MPI_INIT : MPIRUN chose the wrong device ch_shmem; program
> >> >> >> needs device ch_p4
> >> >> >> /home/fetchinson/mpi32shmem/bin/mpirun.ch_shmem: line 91: 17761
> >> >> >> Segmentation fault /home/fetchinson/test "testinput"
> >> >> >>
> >> >> >> The exact same code runs on another machine where someone else
> >> >> >> installed a 32-bit shmem mpich1 library; I'm trying to do the
> >> >> >> same on a different machine now. It's an intel quad core so I
> >> >> >> suppose the shmem device should work. The configure options I
> >> >> >> used for compiling are
> >> >> >>
> >> >> >> export CC='gcc -m32'
> >> >> >> export F77='g77 -m32'
> >> >> >> ./configure --with-device=ch_shmem
> >> >> >>
> >> >> >> Do I need to specify the --with-arch option? Going through the
> >> >> >> list of available --with-arch options I'm not sure what I need
> >> >> >> for an Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz.
> >> >> >>
> >> >> >> What am I doing wrong?
> >> >> >>
> >> >> >> Cheers,
> >> >> >> Daniel
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> > Thanks Anthony, that worked!
> >> >> >> >
> >> >> >> > Concerning MPICH2: I'm not in a position to change the
> >> >> >> > communication routines in the code I use so this would only be
> >> >> >> > an option if MPICH2 is fully backward compatible with MPICH1.
> >> >> >> > Is this the case? Can I just start using MPICH2 without
> >> >> >> > changing anything in the code if it works with MPICH1?
> >> >> >> >
> >> >> >> > Cheers,
> >> >> >> > Daniel
> >> >> >> >
> >> >> >> >
> >> >> >> >> Since you are using MPICH-1, you can try setting CC, F77 and
> >> >> >> >> F90:
> >> >> >> >>
> >> >> >> >> CC="gcc -m32"
> >> >> >> >> F77="g77 -m32"
> >> >> >> >> ....
> >> >> >> >>
> >> >> >> >> BTW, can you use MPICH2 instead? We have shifted all our
> >> >> >> >> development effort to MPICH2, which is more robust than
> >> >> >> >> MPICH-1. If you are using MPICH2, setting CFLAGS and FFLAGS
> >> >> >> >> should work, and you can use nemesis, which uses shared
> >> >> >> >> memory for intranode communication.
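> >> >> >> >>
> >> >> >> >> For example, an MPICH2 build along these lines (just a sketch;
> >> >> >> >> the prefix is a placeholder and the exact options are in the
> >> >> >> >> MPICH2 installation guide):
> >> >> >> >>
> >> >> >> >>   ./configure --with-device=ch3:nemesis CFLAGS=-m32 FFLAGS=-m32 \
> >> >> >> >>       --prefix=/home/fetchinson/mpich2-32
> >> >> >> >>   make && make install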
> >> >> >> >>
> >> >> >> >> A.Chan
> >> >> >> >>
> >> >> >> >> ----- "Daniel Fetchinson" <fetchinson at googlemail.com> wrote:
> >> >> >> >>
> >> >> >> >>> Hi folks,
> >> >> >> >>>
> >> >> >> >>> I'd like to build the 32-bit shmem mpi library because the
> >> >> >> >>> code I'll use with mpi only works in 32-bit because of some
> >> >> >> >>> 32-bit specific assembly code. The machine is a 64-bit
> >> >> >> >>> machine with a 64-bit linux distribution (suse).
> >> >> >> >>>
> >> >> >> >>> I tried configuring mpich-1.2.7p1 in a number of ways:
> >> >> >> >>>
> >> >> >> >>> ./configure --with-device=ch_shmem
> >> >> >> >>> ./configure --with-arch=LINUX --with-device=ch_shmem
> >> >> >> >>> ./configure --with-arch=LINUX32 --with-device=ch_shmem
> >> >> >> >>> ./configure --with-arch=i386 --with-device=ch_shmem
> >> >> >> >>>
> >> >> >> >>> with and without setting the following environment variables
> >> >> >> >>> (in bash):
> >> >> >> >>>
> >> >> >> >>> export CFLAGS=-m32
> >> >> >> >>> export FFLAGS=-m32
> >> >> >> >>>
> >> >> >> >>> Configuring went all right but none of the above combinations
> >> >> >> >>> worked with 'make' unfortunately :( The error in 'make' is
> >> >> >> >>> the following:
> >> >> >> >>>
> >> >> >> >>> /home/fetchinson/mpich-1.2.7p1/bin/mpicc -o overtake overtake.o test.o
> >> >> >> >>> /usr/lib64/gcc/x86_64-suse-linux/4.2.1/../../../../x86_64-suse-linux/bin/ld:
> >> >> >> >>> skipping incompatible /home/fetchinson/mpich-1.2.7p1/lib/libmpich.a
> >> >> >> >>> when searching for -lmpich
> >> >> >> >>> /usr/lib64/gcc/x86_64-suse-linux/4.2.1/../../../../x86_64-suse-linux/bin/ld:
> >> >> >> >>> cannot find -lmpich
> >> >> >> >>> collect2: ld returned 1 exit status
> >> >> >> >>> make[4]: *** [overtake] Error 1
> >> >> >> >>> make[3]: [linktest1] Error 2 (ignored)
> >> >> >> >>> Could not link a C program with MPI libraries
> >> >> >> >>> make[3]: *** [linktest1] Error 1
> >> >> >> >>> make[2]: *** [linktest] Error 2
> >> >> >> >>> make[1]: *** [mpi-lib-test] Error 2
> >> >> >> >>> make: *** [mpi] Error 2
> >> >> >> >>>
> >> >> >> >>> Which makes me believe that I was not able to convince the
> >> >> >> >>> compiler/linker/etc that I really want everything in 32-bit.
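> >> >> >> >>>
> >> >> >> >>> (One way to check what actually got built is to look at the
> >> >> >> >>> object format of the library with standard binutils, e.g.
> >> >> >> >>>
> >> >> >> >>>   objdump -f /home/fetchinson/mpich-1.2.7p1/lib/libmpich.a | grep 'file format'
> >> >> >> >>>
> >> >> >> >>> which reports elf32-i386 for 32-bit objects and elf64-x86-64
> >> >> >> >>> for 64-bit ones.)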
> >> >> >> >>>
> >> >> >> >>> What would be the correct way of compiling the whole mpi
> >> >> >> >>> library in 32-bit?
> >> >> >> >>>
> >> >> >> >>> Cheers,
> >> >> >> >>> Daniel
> >> >> >> >>
> >> >> >> >>
> >> >> >> >
> >> >> >> >
> >> >> >> > --
> >> >> >> > Psss, psss, put it down! - http://www.cafepress.com/putitdown
> >> >> >> >
> >> >> >>
> >> >> >>
> >> >> >> --
> >> >> >> Psss, psss, put it down! - http://www.cafepress.com/putitdown
> >> >> >>
> >> >> >>
> >> >> >
> >> >> >
> >> >>
> >> >>
> >> >> --
> >> >> Psss, psss, put it down! - http://www.cafepress.com/putitdown
> >> >>
> >> >>
> >> >
> >> >
> >>
> >>
> >> --
> >> Psss, psss, put it down! - http://www.cafepress.com/putitdown
> >
> >
>
>
> --
> Psss, psss, put it down! - http://www.cafepress.com/putitdown
>
>