Hi,

Maybe there is a problem with the output-to-screen buffer. That is not an uncommon problem when you use a job scheduler. Try adding unbuffer (if it is available on your system) to your script file and see if it helps.

More information about unbuffer can be found here: http://expect.nist.gov/example/unbuffer.man.html
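For example, in the LSF script quoted below, one way to use it is to wrap the whole launch line (just a sketch; it assumes unbuffer is installed and on the PATH of the compute nodes):

    #BSUB -o std-output
    #BSUB -q linux_parallel_test
    #BSUB -n 4
    # unbuffer runs the command under a pseudo-terminal, so output is flushed
    # line by line instead of sitting in stdio buffers until the job finishes
    unbuffer /usr/lsf6/bin/mpijob_gm /opt/mpich/myrinet/intel/bin/mpirun a.out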
With best regards, Shaman Mahmoudi

On Jan 16, 2007, at 8:56 AM, Ben Tay wrote:

> Hi Pan,
>
> I also got very big library files if I use PETSc with mpich2.
>
> Btw, I have tried several options but I still don't understand why I can't get MPI to work with PETSc.
>
> The 4 processors are running together, but each is running its own copy of the code.
>
> I just use
>
>     integer :: nprocs,rank,ierr
>
>     call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
>     call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
>     call MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)
>
>     print *, rank, nprocs
>
>     call PetscFinalize(ierr)
>
> The answer I get is 0,1 repeated 4 times instead of 0,4 1,4 2,4 3,4.
>
> I'm using my school's server's mpich, and it works if I just compile with pure MPI.
>
> Btw, if I need to send the job to 4 processors, I need to use a script file:
>
>     #BSUB -o std-output
>     #BSUB -q linux_parallel_test
>     #BSUB -n 4
>     /usr/lsf6/bin/mpijob_gm /opt/mpich/myrinet/intel/bin/mpirun a.out
>
> I wonder if the problem lies here...
>
> Thank you.
>
> On 1/16/07, li pan <li76pan@yahoo.com> wrote:
>> I did try to download and install petsc-2.3.2, but ended up with the error: mpich cannot be downloaded & installed, please install mpich for windows manually.
>> On the mpich2 homepage I didn't choose the version for Windows but the source code version, and compiled it myself. Then I gave the --with-mpi-dir="install dir". PETSc was configured, and now it's doing "make".
>> One interesting thing is, I installed PETSc on Linux before. The MPI libraries were very large (libmpich.a == 60 MB). But this time in Cygwin they were only several MB.
>>
>> best
>>
>> pan
>>
>> --- Ben Tay <zonexo@gmail.com> wrote:
>>> hi,
>>>
>>> I installed PETSc using the following command:
>>>
>>>     ./config/configure.py --with-vendor-compilers=intel --with-gnu-compilers=0
>>>       --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/lib/32
>>>       --with-mpi-dir=/opt/mpich/intel/ --with-x=0 --with-shared
>>>
>>> then I got:
>>>
>>>     Compilers:
>>>       C Compiler:         /opt/mpich/intel/bin/mpicc -fPIC -g
>>>       Fortran Compiler:   /opt/mpich/intel/bin/mpif90 -I. -fPIC -g -w90 -w
>>>     Linkers:
>>>       Shared linker:   /opt/mpich/intel/bin/mpicc -shared -fPIC -g
>>>       Dynamic linker:  /opt/mpich/intel/bin/mpicc -shared -fPIC -g
>>>     PETSc:
>>>       PETSC_ARCH: linux-mpif90
>>>       PETSC_DIR: /nas/lsftmp/g0306332/petsc-2.3.2-p8
>>>       **
>>>       ** Now build and test the libraries with "make all test"
>>>       **
>>>       Clanguage: C
>>>       Scalar type: real
>>>     MPI:
>>>       Includes: ['/opt/mpich/intel/include']
>>>     PETSc shared libraries: enabled
>>>     PETSc dynamic libraries: disabled
>>>     BLAS/LAPACK:
>>>       -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32
>>>       -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack -lmkl_ia32 -lguide
>>>
>>> I ran "make all test" and everything seemed fine:
>>>
>>>     /opt/mpich/intel/bin/mpicc -c -fPIC -g
>>>       -I/nas/lsftmp/g0306332/petsc-2.3.2-p8
>>>       -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/bmake/linux-mpif90
>>>       -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/include
>>>       -I/opt/mpich/intel/include
>>>       -D__SDIR__="src/snes/examples/tutorials/" ex19.c
>>>     /opt/mpich/intel/bin/mpicc -fPIC -g -o ex19 ex19.o
>>>       -Wl,-rpath,/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpif90
>>>       -L/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpif90
>>>       -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc
>>>       -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32
>>>       -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack -lmkl_ia32 -lguide
>>>       -lPEPCF90 -Wl,-rpath,/opt/intel/compiler70/ia32/lib
>>>       -Wl,-rpath,/opt/mpich/intel/lib -L/opt/mpich/intel/lib
>>>       -Wl,-rpath,-rpath -Wl,-rpath,-ldl -L-ldl -lmpich
>>>       -Wl,-rpath,/opt/intel/compiler70/ia32/lib
>>>       -Wl,-rpath,/opt/intel/compiler70/ia32/lib
>>>       -L/opt/intel/compiler70/ia32/lib
>>>       -Wl,-rpath,/usr/lib -Wl,-rpath,/usr/lib -L/usr/lib
>>>       -limf -lirc -lcprts -lcxa -lunwind -ldl -lmpichf90 -lPEPCF90
>>>       -Wl,-rpath,/opt/intel/compiler70/ia32/lib
>>>       -L/opt/intel/compiler70/ia32/lib -Wl,-rpath,/usr/lib -L/usr/lib
>>>       -lintrins -lIEPCF90 -lF90 -lm -Wl,-rpath,\ -Wl,-rpath,\ -L\ -ldl -lmpich
>>>       -Wl,-rpath,/opt/intel/compiler70/ia32/lib
>>>       -L/opt/intel/compiler70/ia32/lib
>>>       -Wl,-rpath,/usr/lib -L/usr/lib -limf -lirc -lcprts -lcxa -lunwind -ldl
>>>     /bin/rm -f ex19.o
>>>     C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process
>>>     C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI processes
>>>     Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process
>>>     Completed test examples
>>> I then tried to run my own parallel code. It's a simple code which prints the rank of each processor.
>>>
>>> If I compile the code using just mpif90 test.F (using just mpif.h), I get 0,1,2,3 (4 processors).
>>>
>>> However, if I change the code to use petsc.h etc., i.e.
>>>
>>>           program ns2d_c
>>>
>>>           implicit none
>>>
>>>     #include "include/finclude/petsc.h"
>>>     #include "include/finclude/petscvec.h"
>>>     #include "include/finclude/petscmat.h"
>>>     #include "include/finclude/petscksp.h"
>>>     #include "include/finclude/petscpc.h"
>>>     #include "include/finclude/petscmat.h90"
>>>
>>>           integer,parameter :: size_x=8,size_y=4
>>>
>>>           integer :: ierr,Istart_p,Iend_p,Ntot,Istart_m,Iend_m,k
>>>
>>>           PetscMPIInt nprocs,rank
>>>
>>>           call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
>>>
>>>           call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
>>>
>>>           call MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)
>>>
>>>           print *, rank, nprocs
>>>
>>>           call PetscFinalize(ierr)
>>>
>>>           end program ns2d_c
>>>
>>> I then rename the file to ex2f.F and use "make ex2f".
>>>
>>> The result I get is something like 0,0,0,0.
>>>
>>> Why is this so?
>>>
>>> Thank you.
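One thing worth checking, judging only from the paths quoted above: PETSc was configured with --with-mpi-dir=/opt/mpich/intel/, while the LSF script launches the job with /opt/mpich/myrinet/intel/bin/mpirun, which is a different MPICH build. When an executable is linked against one MPI but started by another MPI's mpirun, each process typically comes up as a singleton, i.e. rank 0 of size 1, which would look exactly like "0,1 repeated 4 times". A quick sanity check, as a sketch only (the -np form is the usual MPICH 1 mpirun syntax, and the paths are taken from the messages above), is to launch the PETSc-linked executable with each mpirun and compare:

    # launch with the MPI that PETSc was configured against
    /opt/mpich/intel/bin/mpirun -np 4 ./ex2f

    # compare with the myrinet mpirun used in the LSF script
    /opt/mpich/myrinet/intel/bin/mpirun -np 4 ./ex2f

If the first run prints ranks 0 to 3 and the second does not, the job script and the PETSc configuration probably need to point at the same MPI installation.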