disk space requirement for mpich2 during PETSc library compilation
Barry Smith
bsmith at mcs.anl.gov
Mon Jan 15 10:17:57 CST 2007
On Mon, 15 Jan 2007, Ben Tay wrote:
> Hi,
>
> I've tried to use the shared version of mpich2, which I installed separately
> (due to the above problem), with PETSc. The command is
>
> ./config/configure.py --with-fc=/lsftmp/g0306332/inter/fc/bin/ifort
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Do not use this option; since you pass in --with-mpi-dir=, configure will use
the proper mpif90 script.
Barry
> --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/
> --with-mpi-dir=/lsftmp/g0306332/mpich2-l32 --with-x=0 --with-shared
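For reference, a sketch of the same invocation with that suggestion applied: the
--with-fc option is dropped so configure picks up mpif90 from the MPICH2 install
given by --with-mpi-dir (paths reused from the quoted command, untested here):

  ./config/configure.py --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/ \
    --with-mpi-dir=/lsftmp/g0306332/mpich2-l32 --with-x=0 --with-shared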
>
> During the test, this error message was shown:
>
> gcc -c -fPIC -Wall -Wwrite-strings -g3
> -I/nas/lsftmp/g0306332/petsc-2.3.2-p8 -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/bmake/linux-mpich2
> -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/include
> -I/lsftmp/g0306332/mpich2-l32/include
> -D__SDIR__="src/snes/examples/tutorials/" ex19.c
> gcc -fPIC -Wall -Wwrite-strings -g3 -o ex19
> ex19.o -Wl,-rpath,/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpich2
> -L/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpich2
> -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc
> -Wl,-rpath,/lsftmp/g0306332/mpich2-l32/lib -L/lsftmp/g0306332/mpich2-l32/lib
> -lmpich -lnsl -laio -lrt -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32
> -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack -lmkl_def -lguide -lvml
> -lpthread -lm -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../..
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -ldl -lgcc_eh
> -Wl,-rpath,"/usr/lib/gcc-lib/i386-pc-linux/3.2.3"
> -Wl,-rpath,"/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.."
> -Wl,-rpath,/lsftmp/g0306332/inter/fc/lib -L/lsftmp/g0306332/inter/fc/lib
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../../
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../../ -lifport -lifcore -limf -lm
> -lipgo -lirc -lgcc_s -lirc_s -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -lm
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../..
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -ldl -lgcc_eh -ldl
> /nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpich2/libpetsc.so: undefined
> reference to `mpi_conversion_fn_null_'
> collect2: ld returned 1 exit status
>
> Then I tried my Fortran example.
>
> I realised that if I compile using ifort (using a command similar to "make
> ex1f"), there is no problem, but the result showed that 4 processors were
> running 4 *individual* copies of the code.
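That symptom often means the launcher and the MPI library used at link time do
not match. A minimal check, assuming the MPICH2 install quoted above (these
commands are not from the original exchange):

  # confirm which wrappers/launchers are found first on the PATH
  which mpif90 mpiexec
  # ask the MPICH2 wrapper which underlying compiler and flags it invokes
  /lsftmp/g0306332/mpich2-l32/bin/mpif90 -show
  # launch with the mpiexec from the same MPICH2 install
  /lsftmp/g0306332/mpich2-l32/bin/mpiexec -n 4 ./ex1f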
>
> I then tried to use the mpif90 in the mpich2 directory. Compiling was OK, but
> during linking the error was:
>
> /lsftmp/g0306332/mpich2-l32/bin/mpif90 -fPIC -g -o ex2f ex2f.o
> -Wl,-rpath,/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpich2
> -L/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpich2 -lpetscksp -lpetscdm
> -lpetscmat -lpetscvec -lpetsc -Wl,-rpath,/lsftmp/g0306332/mpich2-l32/lib
> -L/lsftmp/g0306332/mpich2-l32/lib -lmpich -lnsl -laio -lrt
> -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32
> -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack -lmkl_def -lguide -lvml
> -lpthread -lm -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../..
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -ldl -lgcc_eh
> -Wl,-rpath,"/usr/lib/gcc-lib/i386-pc-linux/3.2.3"
> -Wl,-rpath,"/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.."
> -Wl,-rpath,/lsftmp/g0306332/inter/fc/lib -L/lsftmp/g0306332/inter/fc/lib
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../../
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../../ -lifport -lifcore -limf -lm
> -lipgo -lirc -lgcc_s -lirc_s -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -lm
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3
> -Wl,-rpath,/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../..
> -L/usr/lib/gcc-lib/i386-pc-linux/3.2.3/../../.. -ldl -lgcc_eh -ldl
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__clog10q'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__cexp10q'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__csqrtq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__ccoshq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__ctanhq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__ccosq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__clogq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__csinhq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__ctanq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__cpowq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__exp10q'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__cexpq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__cabsq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__fabsq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__csinq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__ldexpq'
> /lsftmp/g0306332/inter/fc/lib/libifcore.so.5: undefined reference to
> `__frexpq'
>
>
> So what is wrong? I am a novice in MPI, so I hope someone can give some
> advice. Thank you.
>
> PS: Has the "make clean" on MPICH been added to
> BuildSystem/config/packages/MPI.py?
>
>
> On 1/13/07, Ben Tay <zonexo at gmail.com> wrote:
> >
> > Thanks Barry & Aron.
> >
> > I've tried to install mpich2 in a scratch directory, and it finished in a
> > short while.
> >
> >
> > On 1/13/07, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > >
> > >
> > > Ben,
> > >
> > > This is partially our fault. We never run "make clean" on MPICH after the
> > > install, so there are lots of .o and .a files lying around. I've updated
> > > BuildSystem/config/packages/MPI.py to run make clean after the install.
> > >
> > > Barry
> > >
> > > On Sat, 13 Jan 2007, Aron Ahmadia wrote:
> > >
> > > > Hi Ben,
> > > >
> > > > My PETSc install on an OS X machine requires about 343 MB of space,
> > > > about 209 MB of which is MPICH. Unfortunately, I believe this has the
> > > > potential to exceed 500 MB temporarily, as the make process generates a
> > > > lot of object files during the software build.
> > > >
> > > > I think what you want to do is compile and install your own copy of
> > > > MPICH (using a scratch directory or whatever tools you have at your
> > > > disposal), then pass the --with-mpi-dir=/location/to/mpich/install
> > > > argument to configure.
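A sketch of that workflow (the /tmp/scratch and $HOME/mpich2-install paths below
are placeholders, not paths from this thread):

  # build and install MPICH2 from a scratch directory into a small prefix
  cd /tmp/scratch/mpich2-source
  ./configure --prefix=$HOME/mpich2-install
  make
  make install
  # then point PETSc's configure at that install
  ./config/configure.py --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/ \
    --with-mpi-dir=$HOME/mpich2-install --with-x=0 --with-shared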
> > > >
> > > > I've never staged a PETSc build on a machine with an extremely limited
> > > > quota; the developers might have some suggestions on how to do this.
> > > >
> > > > ~A
> > > >
> > > > On 1/13/07, Ben Tay <zonexo at gmail.com> wrote:
> > > > > Hi,
> > > > >
> > > > > I am trying to compile PETSc with MPI using --download-mpich=1 on
> > > > > Linux. The command is
> > > > >
> > > > >
> > > > >
> > > > > ./config/configure.py
> > > > > --with-fc=/lsftmp/g0306332/inter/fc/bin/ifort
> > > > > --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/
> > > > > --download-mpich=1 --with-x=0 --with-shared
> > > > >
> > > > > It displays:
> > > > >
> > > > >
> > > > > =================================================================================
> > > > >          Running configure on MPICH; this may take several minutes
> > > > > =================================================================================
> > > > > =================================================================================
> > > > >          Running make on MPICH; this may take several minutes
> > > > > =================================================================================
> > > > >
> > > > > Then it says disk quota exceeded. I have about 450 MB of free space,
> > > > > which is all filled up when the error shows. May I know how much disk
> > > > > space is required?
> > > > >
> > > > > Also, can I compile just MPICH in a scratch directory and then move it
> > > > > to the PETSc externalpackages directory? Or do I have to compile
> > > > > everything (including PETSc) in a scratch directory and then move it to
> > > > > my directory?
> > > > >
> > > > > Thank you.
> > > >
> > > >
> > >
> > >
> >
>