Thanks Satish! I was being careless. My school has both the Myrinet and the Intel MPI installations, and I was using the wrong one. I've got it working now.

Thanks again!
On 1/18/07, Satish Balay <balay@mcs.anl.gov> wrote:
You've built PETSc with the following:

> > > > > > > --with-mpi-dir=/opt/mpich/intel/ --with-x=0

However, you are comparing the simple MPI test with a different MPI [from /opt/mpich/myrinet/intel]:

> > > > > /usr/lsf6/bin/mpijob_gm /opt/mpich/myrinet/intel/bin/mpirun a.out

You are using a different MPI implementation for each of these cases, hence the results are different.

I guess you should be using --with-mpi-dir=/opt/mpich/myrinet/intel with the PETSc configure.

If you still encounter problems, please send us the COMPLETE info for the two tests, i.e. compile, run, output, and the location of the compilers used [which mpif90].

Satish
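For reference, a minimal sketch of such a rebuild, assuming the PETSC_DIR, the Myrinet MPICH path, and the configure options quoted later in this thread, with only --with-mpi-dir changed:

  cd /nas/lsftmp/g0306332/petsc-2.3.2-p8
  ./config/configure.py --with-vendor-compilers=intel --with-gnu-compilers=0 \
      --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/lib/32 \
      --with-mpi-dir=/opt/mpich/myrinet/intel --with-x=0 --with-shared
  make all test
  which mpif90   # check which MPI wrappers are picked up when building the plain-MPI test

After such a rebuild the executable is linked against the same MPICH that the mpijob_gm/mpirun line in the job script starts, so MPI_Comm_size should report 4 processes.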
On Wed, 17 Jan 2007, Ben Tay wrote:

> Hi,
>
> My school's server requires a minimum of 4 processors per job. The output is
>
> Vector length 20
> Vector length 20
> Vector length 20 40 60
> All other values should be near zero
> VecScale 0
> VecCopy 0
> VecAXPY 0
> VecAYPX 0
> VecSwap 0
> VecSwap 0
> VecWAXPY 0
> VecPointwiseMult 0
> VecPointwiseDivide 0
> VecMAXPY 0 0 0
> Vector length 20 40 60
> All other values should be near zero
> VecScale 0
> VecCopy 0
> VecAXPY 0
> VecAYPX 0
> VecSwap 0
> VecSwap 0
> VecWAXPY 0
> VecPointwiseMult 0
> VecPointwiseDivide 0
> VecMAXPY 0 0 0
> Vector length 20
> Vector length 20 40 60
> All other values should be near zero
> Vector length 20
> Vector length 20 40 60
> All other values should be near zero
> VecScale 0
> VecCopy 0
> VecAXPY 0
> VecAYPX 0
> VecSwap 0
> VecSwap 0
> VecWAXPY 0
> VecPointwiseMult 0
> VecPointwiseDivide 0
> VecMAXPY 0 0 0
> VecScale 0
> VecCopy 0
> VecAXPY 0
> VecAYPX 0
> VecSwap 0
> VecSwap 0
> VecWAXPY 0
> VecPointwiseMult 0
> VecPointwiseDivide 0
> VecMAXPY 0 0 0
>
> So what's the verdict?
>
> Thank you.
>
>
<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>> ><br>> ><br>> > Ben,<br>> ><br>> > I don't know what to say; what you report is inherently contradictory.<br>> > What happens when you run src/vec/vec/examples/tutorials/ex1.c on 2
<br>> > processors?<br>> ><br>> > Barry<br>> ><br>> ><br>> > On Wed, 17 Jan 2007, Ben Tay wrote:<br>> ><br>> > > Thanks Shaman. But the problem is that I get<br>> > >
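A minimal sketch of one way to do that, assuming PETSC_DIR and PETSC_ARCH are set as in the build quoted below, and that an mpirun exists under the same /opt/mpich/intel tree as the wrappers PETSc was configured with:

  cd $PETSC_DIR/src/vec/vec/examples/tutorials
  make ex1
  # launch with the mpirun from the same MPI tree PETSc was built against
  /opt/mpich/intel/bin/mpirun -np 2 ./ex1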
> > On Wed, 17 Jan 2007, Ben Tay wrote:
> >
> > > Thanks Shaman. But the problem is that I get
> > >
> > > 0,1
> > > 0,1
> > > 0,1
> > > 0,1
> > >
> > > instead of
> > >
> > > 0,4
> > > 1,4
> > > 2,4
> > > 3,4
> > >
> > > which would mean there are 4 processors. Instead, from the output above, it seems that 4 serial jobs are running.
> > >
> > > Barry:
> > >
> > > The script was provided by my school for submitting parallel jobs. I use the
> > > same script when submitting pure MPI jobs and it works. On the other hand,
> > > the PETSc parallel code ran successfully on another of my school's servers.
> > > It was also submitted using a script, but a slightly different one, since
> > > it's another system. However, that server is very busy, hence I usually
> > > use the current server.
> > >
> > > Do you have any other solution? Or should I try other ways of compiling?
> > > Btw, I am using ifc 7.0 and icc 7.0. The codes are written in Fortran.
> > >
> > > Thank you.
> > >
> > >
> > > On 1/16/07, Barry Smith <bsmith@mcs.anl.gov> wrote:
> > > >
> > > >
> > > > Ben,
> > > >
> > > > You definitely have to submit a PETSc job just like
> > > > any other MPI job. So please try using the script.
> > > >
> > > > Barry
> > > >
> > > >
> > > > On Tue, 16 Jan 2007, Ben Tay wrote:
> > > >
> > > > > Hi Pan,
> > > > >
> > > > > I also got very big library files if I use PETSc with mpich2.
> > > > >
> > > > > Btw, I have tried several options but I still don't understand why I
> > > > > can't get MPI to work with PETSc.
> > > > >
> > > > > The 4 processors are running together, but each is running its own copy of the code.
> > > > >
> > > > > I just use
> > > > >
> > > > > integer :: nprocs,rank,ierr
> > > > >
> > > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
> > > > >
> > > > > call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
> > > > >
> > > > > call MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)
> > > > >
> > > > > print *, rank, nprocs
> > > > >
> > > > > call PetscFinalize(ierr)
> > > > >
> > > > > The answer I get is 0,1 repeated 4 times instead of 0,4 1,4 2,4 3,4.
> > > > >
> > > > > I'm using my school's server's MPICH, and it works if I just compile in
> > > > > pure MPI.
> > > > >
> > > > > Btw, if I need to send the job to 4 processors, I need to use a script
> > > > > file:
> > > > >
> > > > > #BSUB -o std-output
> > > > > #BSUB -q linux_parallel_test
> > > > > #BSUB -n 4
> > > > > /usr/lsf6/bin/mpijob_gm /opt/mpich/myrinet/intel/bin/mpirun a.out
> > > > >
> > > > > I wonder if the problem lies here...
> > > > >
> > > > > Thank you.
> > > > >
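For comparison, a plain-MPI version of the same rank/size check (which Ben mentions building with "mpif90 test.F" further down the quoted thread) could be compiled with wrappers from the same Myrinet MPICH tree as the mpirun in that script. This is only a sketch; the mpif90 path under that tree and the test.F / job.lsf names are assumptions:

  # build with the MPI that matches the mpirun used by the job script
  /opt/mpich/myrinet/intel/bin/mpif90 test.F -o a.out
  # submit the #BSUB script shown above (saved here as job.lsf)
  bsub < job.lsf

If the launcher and the MPI library match, every rank should then report nprocs = 4 rather than 1.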
> > > > >
> > > > > On 1/16/07, li pan <li76pan@yahoo.com> wrote:
> > > > > >
> > > > > > I did try to download and install petsc-2.3.2, but ended
> > > > > > up with the error "mpich can not be download & installed,
> > > > > > please install mpich for windows manually".
> > > > > > On the mpich2 homepage I didn't choose the Windows version
> > > > > > but the source code version, and compiled it myself. Then I gave
> > > > > > the --with-mpi-dir="install dir". PETSc was configured, and now
> > > > > > it's doing "make".
> > > > > > One interesting thing is that I installed PETSc on Linux
> > > > > > before. The MPI libraries were very large
> > > > > > (libmpich.a == 60 MB). But this time in Cygwin it was
> > > > > > only several MBs.
> > > > > >
> > > > > > best
> > > > > >
> > > > > > pan
> > > > > >
> > > > > >
> > > > > > --- Ben Tay <zonexo@gmail.com> wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > I installed PETSc using the following command:
> > > > > > >
> > > > > > > ./config/configure.py --with-vendor-compilers=intel
> > > > > > > --with-gnu-compilers=0
> > > > > > > --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/lib/32
> > > > > > > --with-mpi-dir=/opt/mpich/intel/ --with-x=0
> > > > > > > --with-shared
> > > > > > >
> > > > > > > then I got:
> > > > > > >
> > > > > > > Compilers:
> > > > > > > C Compiler: /opt/mpich/intel/bin/mpicc -fPIC -g
> > > > > > > Fortran Compiler: /opt/mpich/intel/bin/mpif90 -I. -fPIC -g -w90 -w
> > > > > > > Linkers:
> > > > > > > Shared linker: /opt/mpich/intel/bin/mpicc -shared -fPIC -g
> > > > > > > Dynamic linker: /opt/mpich/intel/bin/mpicc -shared -fPIC -g
> > > > > > > PETSc:
> > > > > > > PETSC_ARCH: linux-mpif90
> > > > > > > PETSC_DIR: /nas/lsftmp/g0306332/petsc-2.3.2-p8
> > > > > > > **
> > > > > > > ** Now build and test the libraries with "make all test"
> > > > > > > **
> > > > > > > Clanguage: C
> > > > > > > Scalar type: real
> > > > > > > MPI:
> > > > > > > Includes: ['/opt/mpich/intel/include']
> > > > > > > PETSc shared libraries: enabled
> > > > > > > PETSc dynamic libraries: disabled
> > > > > > > BLAS/LAPACK:
> > > > > > > -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32
> > > > > > > -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack -lmkl_ia32 -lguide
> > > > > > >
> > > > > > > I ran "make all test" and everything seems fine:
> > > > > > >
> > > > > > > /opt/mpich/intel/bin/mpicc -c -fPIC -g
> > > > > > > -I/nas/lsftmp/g0306332/petsc-2.3.2-p8
> > > > > > > -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/bmake/linux-mpif90
> > > > > > > -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/include
> > > > > > > -I/opt/mpich/intel/include
> > > > > > > -D__SDIR__="src/snes/examples/tutorials/" ex19.c
> > > > > > > /opt/mpich/intel/bin/mpicc -fPIC -g -o ex19 ex19.o
> > > > > > > -Wl,-rpath,/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpif90
> > > > > > > -L/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpif90
> > > > > > > -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc
> > > > > > > -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32
> > > > > > > -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack -lmkl_ia32 -lguide
> > > > > > > -lPEPCF90 -Wl,-rpath,/opt/intel/compiler70/ia32/lib
> > > > > > > -Wl,-rpath,/opt/mpich/intel/lib -L/opt/mpich/intel/lib
> > > > > > > -Wl,-rpath,-rpath -Wl,-rpath,-ldl -L-ldl -lmpich
> > > > > > > -Wl,-rpath,/opt/intel/compiler70/ia32/lib
> > > > > > > -Wl,-rpath,/opt/intel/compiler70/ia32/lib
> > > > > > > -L/opt/intel/compiler70/ia32/lib
> > > > > > > -Wl,-rpath,/usr/lib -Wl,-rpath,/usr/lib -L/usr/lib
> > > > > > > -limf -lirc -lcprts -lcxa -lunwind -ldl -lmpichf90 -lPEPCF90
> > > > > > > -Wl,-rpath,/opt/intel/compiler70/ia32/lib
> > > > > > > -L/opt/intel/compiler70/ia32/lib -Wl,-rpath,/usr/lib -L/usr/lib
> > > > > > > -lintrins -lIEPCF90 -lF90 -lm -Wl,-rpath,\ -Wl,-rpath,\ -L\ -ldl -lmpich
> > > > > > > -Wl,-rpath,/opt/intel/compiler70/ia32/lib
> > > > > > > -L/opt/intel/compiler70/ia32/lib
> > > > > > > -Wl,-rpath,/usr/lib -L/usr/lib -limf -lirc -lcprts -lcxa -lunwind -ldl
> > > > > > > /bin/rm -f ex19.o
> > > > > > > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process
> > > > > > > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI processes
> > > > > > > Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process
> > > > > > > Completed test examples
> > > > > > >
> > > > > > > I then tried to run my own parallel code. It's a simple code which prints
> > > > > > > the rank of each processor.
> > > > > > >
> > > > > > > If I compile the code using just mpif90 test.F (using just mpif.h),
> > > > > > > I get 0,1,2,3 (4 processors).
> > > > > > >
> > > > > > > However, if I change the code to use petsc.h etc., i.e.
> > > > > > >
> > > > > > > program ns2d_c
> > > > > > >
> > > > > > > implicit none
> > > > > > >
> > > > > > > #include "include/finclude/petsc.h"
> > > > > > > #include "include/finclude/petscvec.h"
> > > > > > > #include "include/finclude/petscmat.h"
> > > > > > > #include "include/finclude/petscksp.h"
> > > > > > > #include "include/finclude/petscpc.h"
> > > > > > > #include "include/finclude/petscmat.h90"
> > > > > > >
> > > > > > > integer,parameter :: size_x=8,size_y=4
> > > > > > >
> > > > > > > integer :: ierr,Istart_p,Iend_p,Ntot,Istart_m,Iend_m,k
> > > > > > >
> > > > > > > PetscMPIInt nprocs,rank
> > > > > > >
> > > > > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
> > > > > > >
> > > > > > > call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
> > > > > > >
> > > > > > > call MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)
> > > > > > >
> > > > > > > end program ns2d_c
> > > > > > >
> > > > > > > I then rename the file to ex2f.F and use "make ex2f".
> > > > > > >
> > > > > > > The result I get is something like 0,0,0,0.
> > > > > > >
> > > > > > > Why is this so?
> > > > > > >
> > > > > > > Thank you.
> > > > > > >