Hi,

Maybe there is a problem with the output-to-screen buffer. That is a fairly common problem when you use a job scheduler. Try adding unbuffer (if it is available on your system) to your script file and see if it improves.

More information regarding unbuffer can be found here: http://expect.nist.gov/example/unbuffer.man.html

With best regards,
Shaman Mahmoudi

On Jan 16, 2007, at 8:56 AM, Ben Tay wrote:

> Hi Pan,
>
> I also got very big library files when I used PETSc with mpich2.
>
> Btw, I have tried several options, but I still don't understand why I can't get MPI to work with PETSc.
>
> The 4 processors are running together, but each is running its own code.
>
> I just use
>
>   integer :: nprocs,rank,ierr
>
>   call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
>   call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
>   call MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)
>
>   print *, rank, nprocs
>
>   call PetscFinalize(ierr)
>
> The answer I get is 0,1 repeated 4 times instead of 0,4 1,4 2,4 3,4.
>
> I'm using my school's server's mpich, and it works if I just compile in pure MPI.
>
> Btw, if I need to send the job to 4 processors, I need to use a script file:
>
>   #BSUB -o std-output
>   #BSUB -q linux_parallel_test
>   #BSUB -n 4
>   /usr/lsf6/bin/mpijob_gm /opt/mpich/myrinet/intel/bin/mpirun a.out
>
> I wonder if the problem lies here...
>
> Thank you.
>
>
> On 1/16/07, li pan <li76pan@yahoo.com> wrote:
>
>> I did try to download and install petsc 2.3.2, but ended up with the error "mpich can not be download & installed, please install mpich for windows manually".
>> On the mpich2 homepage I didn't choose the version for Windows but the source code version, and compiled it myself. Then I gave the --with-mpi-dir="install dir". PETSc was configured, and now it's doing "make".
>> One interesting thing: I installed PETSc on Linux before, and the MPI libraries were very large (libmpich.a == 60 MB), but this time in cygwin they were only several MB.
>>
>> best
>>
>> pan
>>
>>
>> --- Ben Tay <zonexo@gmail.com> wrote:
>>
>>> hi,
>>>
>>> I installed PETSc using the following command:
>>>
>>>   ./config/configure.py --with-vendor-compilers=intel --with-gnu-compilers=0
>>>     --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/lib/32
>>>     --with-mpi-dir=/opt/mpich/intel/ --with-x=0 --with-shared
>>>
>>> then I got:
>>>
>>>   Compilers:
>>>     C Compiler:       /opt/mpich/intel/bin/mpicc -fPIC -g
>>>     Fortran Compiler: /opt/mpich/intel/bin/mpif90 -I. -fPIC -g -w90 -w
>>>   Linkers:
>>>     Shared linker:  /opt/mpich/intel/bin/mpicc -shared -fPIC -g
>>>     Dynamic linker: /opt/mpich/intel/bin/mpicc -shared -fPIC -g
>>>   PETSc:
>>>     PETSC_ARCH: linux-mpif90
>>>     PETSC_DIR: /nas/lsftmp/g0306332/petsc-2.3.2-p8
>>>     **
>>>     ** Now build and test the libraries with "make all test"
>>>     **
>>>     Clanguage: C
>>>     Scalar type: real
>>>   MPI:
>>>     Includes: ['/opt/mpich/intel/include']
>>>   PETSc shared libraries: enabled
>>>   PETSc dynamic libraries: disabled
>>>   BLAS/LAPACK: -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32
>>>     -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack -lmkl_ia32 -lguide
>>>
>>> I ran "make all test" and everything seemed fine:
>>>
>>>   /opt/mpich/intel/bin/mpicc -c -fPIC -g
>>>     -I/nas/lsftmp/g0306332/petsc-2.3.2-p8
>>>     -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/bmake/linux-mpif90
>>>     -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/include -I/opt/mpich/intel/include
>>>     -D__SDIR__="src/snes/examples/tutorials/" ex19.c
>>>   /opt/mpich/intel/bin/mpicc -fPIC -g -o ex19 ex19.o
>>>     -Wl,-rpath,/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpif90
>>>     -L/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpif90
>>>     -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc
>>>     -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32
>>>     -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack -lmkl_ia32 -lguide
>>>     -lPEPCF90 -Wl,-rpath,/opt/intel/compiler70/ia32/lib
>>>     -Wl,-rpath,/opt/mpich/intel/lib -L/opt/mpich/intel/lib
>>>     -Wl,-rpath,-rpath -Wl,-rpath,-ldl -L-ldl -lmpich
>>>     -Wl,-rpath,/opt/intel/compiler70/ia32/lib
>>>     -Wl,-rpath,/opt/intel/compiler70/ia32/lib -L/opt/intel/compiler70/ia32/lib
>>>     -Wl,-rpath,/usr/lib -Wl,-rpath,/usr/lib -L/usr/lib
>>>     -limf -lirc -lcprts -lcxa -lunwind -ldl -lmpichf90 -lPEPCF90
>>>     -Wl,-rpath,/opt/intel/compiler70/ia32/lib -L/opt/intel/compiler70/ia32/lib
>>>     -Wl,-rpath,/usr/lib -L/usr/lib -lintrins -lIEPCF90 -lF90 -lm
>>>     -Wl,-rpath,\ -Wl,-rpath,\ -L\ -ldl -lmpich
>>>     -Wl,-rpath,/opt/intel/compiler70/ia32/lib -L/opt/intel/compiler70/ia32/lib
>>>     -Wl,-rpath,/usr/lib -L/usr/lib -limf -lirc -lcprts -lcxa -lunwind -ldl
>>>   /bin/rm -f ex19.o
>>>   C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process
>>>   C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI processes
>>>   Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process
>>>   Completed test examples
>>>
>>> I then tried to run my own parallel code.
>>> It's a simple code that prints the rank of each processor.
>>>
>>> If I compile the code using just mpif90 test.F (using just mpif.h), I get 0,1,2,3 (4 processors).
>>>
>>> However, if I change the code to use petsc.h etc., i.e.
>>>
>>>   program ns2d_c
>>>
>>>   implicit none
>>>
>>> #include "include/finclude/petsc.h"
>>> #include "include/finclude/petscvec.h"
>>> #include "include/finclude/petscmat.h"
>>> #include "include/finclude/petscksp.h"
>>> #include "include/finclude/petscpc.h"
>>> #include "include/finclude/petscmat.h90"
>>>
>>>   integer,parameter :: size_x=8,size_y=4
>>>
>>>   integer :: ierr,Istart_p,Iend_p,Ntot,Istart_m,Iend_m,k
>>>
>>>   PetscMPIInt nprocs,rank
>>>
>>>   call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
>>>   call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
>>>   call MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)
>>>
>>>   end program ns2d_c
>>>
>>> I then rename the file to ex2f.F and use "make ex2f".
>>>
>>> The result I get is something like 0,0,0,0.
>>>
>>> Why is this so?
>>>
>>> Thank you.
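
For illustration, below is a minimal sketch of how the LSF script quoted above might look once the launch is routed through unbuffer, as suggested at the top of this message. It assumes the unbuffer utility from the expect package is installed and on the PATH on the cluster; the queue name and paths are simply the ones from Ben's script.

#BSUB -o std-output
#BSUB -q linux_parallel_test
#BSUB -n 4
# Run the MPI launch under a pseudo-terminal so that standard output
# is flushed line by line instead of sitting in the stdio buffer.
unbuffer /usr/lsf6/bin/mpijob_gm /opt/mpich/myrinet/intel/bin/mpirun a.out

Since unbuffer only changes how output is flushed, not how the four processes are started, this is a quick way to check whether the odd rank output is just a buffering artifact.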