<div>Thanks, Shaman. But the problem is that I get</div>
<div>&nbsp;</div>
<div>0,1</div>
<div>0,1</div>
<div>0,1</div>
<div>0,1</div>
<div>&nbsp;</div>
<div>instead of </div>
<div>&nbsp;</div>
<div>0,4</div>
<div>1,4</div>
<div>2,4</div>
<div>3,4</div>
<div>&nbsp;</div>
<div>which would mean the job is running on 4 processors. From the output above, it seems that 4 separate serial jobs are running instead.</div>
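<div>&nbsp;</div>
<div>Just to be clear, the code below is essentially the whole test I run. It is only a minimal sketch: check_mpi is just a name I made up, and the MPI_COMM_WORLD size and the MPI_Get_processor_name call are extra diagnostics I added to see how many processes each copy sees and which node it runs on; the rest is the same as the code quoted further down.</div>
<div>&nbsp;</div>
<div>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; program check_mpi<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; implicit none<br>
<br>
#include &quot;include/finclude/petsc.h&quot;<br>
<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; PetscMPIInt nprocs,rank<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; integer ierr,wsize,namelen<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; character*(MPI_MAX_PROCESSOR_NAME) pname<br>
<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call PetscInitialize(PETSC_NULL_CHARACTER,ierr)<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)<br>
!&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; extra checks (my additions): world size and the node each copy runs on<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call MPI_Comm_size(MPI_COMM_WORLD,wsize,ierr)<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call MPI_Get_processor_name(pname,namelen,ierr)<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; print *, rank, nprocs, wsize, pname(1:namelen)<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call PetscFinalize(ierr)<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; end program check_mpi</div>
<div>&nbsp;</div>
<div>If the four copies really are separate serial jobs, I expect each line to show rank 0 out of 1 with both communicators.</div>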
<div>&nbsp;</div>
<div>Barry:</div>
<div>&nbsp;</div>
<div>The script was given by my school for submitting parallel jobs. I use the same script when submitting a pure MPI job and it works. On the other hand, the PETSc parallel code ran successfully on another of my school&#39;s servers. It was also submitted using a script, but a slightly different one since it is another system. However, that server is very busy, hence I usually use the current server.
</div>
<div>&nbsp;</div>
<div>Do you have any other solution? Or should I try other ways of compiling? Btw, I am using ifc 7.0 and icc 7.0. The codes are written in Fortran.</div>
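<div>&nbsp;</div>
<div>For example, I wonder whether I should reconfigure PETSc against the Myrinet MPICH that the script&#39;s mpijob_gm actually launches with, i.e. something along these lines (only a guess on my part; I am not sure whether /opt/mpich/myrinet/intel is the right --with-mpi-dir for this queue):</div>
<div>&nbsp;</div>
<div>./config/configure.py --with-vendor-compilers=intel --with-gnu-compilers=0 --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/lib/32 --with-mpi-dir=/opt/mpich/myrinet/intel --with-x=0 --with-shared</div>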
<div>&nbsp;</div>
<div>Thank you.<br><br>&nbsp;</div>
<div><span class="gmail_quote">On 1/16/07, <b class="gmail_sendername">Barry Smith</b> &lt;<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>&gt; wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid"><br>Ben,<br><br>&nbsp;&nbsp;You definitely have to submit a PETSc job just like<br>any other MPI job. So please try using the script.
<br><br>&nbsp;&nbsp;Barry<br><br><br>On Tue, 16 Jan 2007, Ben Tay wrote:<br><br>&gt; Hi Pan,<br>&gt;<br>&gt; I also got very big library files if I use PETSc with mpich2.<br>&gt;<br>&gt; Btw, I have tried several options but I still don&#39;t understand why I can&#39;t
<br>&gt; get mpi to work with PETSc.<br>&gt;<br>&gt; The 4 processors are running together but each running its own code.<br>&gt;<br>&gt; I just use<br>&gt;<br>&gt;<br>&gt; integer :: nprocs,rank,ierr<br>&gt;<br>&gt; call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
<br>&gt;<br>&gt; call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)<br>&gt;<br>&gt; call MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)<br>&gt;<br>&gt; print *, rank, nprocs<br>&gt;<br>&gt; call PetscFinalize(ierr)<br>&gt;<br>&gt;
<br>&gt;<br>&gt; The answers I get is 0,1 repeated 4 times instead of 0,4 1,4 2,4 3,4.<br>&gt;<br>&gt; I&#39;m using my school&#39;s server&#39;s mpich and it work if I just compile in pure<br>&gt; mpi.<br>&gt;<br>&gt; Btw, if I need to send the job to 4 processors, I need to use a script file:
<br>&gt;<br>&gt; #BSUB -o std-output<br>&gt; #BSUB -q linux_parallel_test<br>&gt; #BSUB -n 4<br>&gt; /usr/lsf6/bin/mpijob_gm /opt/mpich/myrinet/intel/bin/mpirun a.out<br>&gt;<br>&gt; I wonder if the problem lies here...<br>
&gt;<br>&gt;<br>&gt;<br>&gt; Thank you.<br>&gt;<br>&gt;<br>&gt;<br>&gt; On 1/16/07, li pan &lt;<a href="mailto:li76pan@yahoo.com">li76pan@yahoo.com</a>&gt; wrote:<br>&gt; &gt;<br>&gt; &gt; I did try to download and install 
petsc2.3.2, but end<br>&gt; &gt; up with error: mpich can not be download &amp; installed,<br>&gt; &gt; please install mpich for windows manually.<br>&gt; &gt; In the homepage of mpich2, I didn&#39;t choose the version<br>
&gt; &gt; for windows but the source code version. And compiled<br>&gt; &gt; it by myself. Then, I gave the --with-mpi-dir=&quot;install<br>&gt; &gt; dir&quot;. Petsc was configured, and now it&#39;s doing &quot;make&quot;.
<br>&gt; &gt; One interesting thing is, I installed petsc in linux<br>&gt; &gt; before. The mpi libraries were very large<br>&gt; &gt; (libmpich.a==60 mb). But this time in cygwin it was<br>&gt; &gt; only several mbs.<br>
&gt; &gt;<br>&gt; &gt; best<br>&gt; &gt;<br>&gt; &gt; pan<br>&gt; &gt;<br>&gt; &gt;<br>&gt; &gt; --- Ben Tay &lt;<a href="mailto:zonexo@gmail.com">zonexo@gmail.com</a>&gt; wrote:<br>&gt; &gt;<br>&gt; &gt; &gt; hi,<br>&gt; &gt; &gt;
<br>&gt; &gt; &gt; i install PETSc using the following command:<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; ./config/configure.py --with-vendor-compilers=intel<br>&gt; &gt; &gt; --with-gnu-compilers=0<br>&gt; &gt; &gt;<br>&gt; &gt; --with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/lib/32
<br>&gt; &gt; &gt; --with-mpi-dir=/opt/mpich/intel/ --with-x=0<br>&gt; &gt; &gt; --with-shared<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; then i got:<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; Compilers:<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;&nbsp;&nbsp; C Compiler:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /opt/mpich/intel/bin/mpicc
<br>&gt; &gt; &gt; -fPIC -g<br>&gt; &gt; &gt;&nbsp;&nbsp; Fortran Compiler:&nbsp;&nbsp; /opt/mpich/intel/bin/mpif90<br>&gt; &gt; &gt; -I. -fPIC -g -w90 -w<br>&gt; &gt; &gt; Linkers:<br>&gt; &gt; &gt;&nbsp;&nbsp; Shared linker:&nbsp;&nbsp; /opt/mpich/intel/bin/mpicc
<br>&gt; &gt; &gt; -shared&nbsp;&nbsp;-fPIC -g<br>&gt; &gt; &gt;&nbsp;&nbsp; Dynamic linker:&nbsp;&nbsp; /opt/mpich/intel/bin/mpicc<br>&gt; &gt; &gt; -shared&nbsp;&nbsp;-fPIC -g<br>&gt; &gt; &gt; PETSc:<br>&gt; &gt; &gt;&nbsp;&nbsp; PETSC_ARCH: linux-mpif90<br>&gt; &gt; &gt;&nbsp;&nbsp; PETSC_DIR: /nas/lsftmp/g0306332/petsc-
2.3.2-p8<br>&gt; &gt; &gt;&nbsp;&nbsp; **<br>&gt; &gt; &gt;&nbsp;&nbsp; ** Now build and test the libraries with &quot;make all<br>&gt; &gt; &gt; test&quot;<br>&gt; &gt; &gt;&nbsp;&nbsp; **<br>&gt; &gt; &gt;&nbsp;&nbsp; Clanguage: C<br>&gt; &gt; &gt;&nbsp;&nbsp; Scalar type:real
<br>&gt; &gt; &gt; MPI:<br>&gt; &gt; &gt;&nbsp;&nbsp; Includes: [&#39;/opt/mpich/intel/include&#39;]<br>&gt; &gt; &gt;&nbsp;&nbsp; PETSc shared libraries: enabled<br>&gt; &gt; &gt;&nbsp;&nbsp; PETSc dynamic libraries: disabled<br>&gt; &gt; &gt; BLAS/LAPACK:
<br>&gt; &gt; &gt; -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32<br>&gt; &gt; &gt; -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack<br>&gt; &gt; &gt; -lmkl_ia32 -lguide<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; i ran &quot;make all test&quot; and everything seems fine
<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; /opt/mpich/intel/bin/mpicc -c -fPIC -g<br>&gt; &gt; &gt;<br>&gt; &gt; -I/nas/lsftmp/g0306332/petsc-2.3.2-p8-I/nas/lsftmp/g0306332/petsc-<br>&gt; &gt; &gt; 2.3.2-p8/bmake/linux-mpif90<br>
&gt; &gt; &gt; -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/include<br>&gt; &gt; &gt; -I/opt/mpich/intel/include<br>&gt; &gt; &gt; -D__SDIR__=&quot;src/snes/examples/tutorials/&quot; ex19.c<br>&gt; &gt; &gt; /opt/mpich/intel/bin/mpicc -fPIC -g&nbsp;&nbsp;-o ex19
<br>&gt; &gt; &gt; ex19.o-Wl,-rpath,/nas/lsftmp/g0306332/petsc-<br>&gt; &gt; &gt; 2.3.2-p8/lib/linux-mpif90<br>&gt; &gt; &gt;<br>&gt; &gt; -L/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpif90<br>&gt; &gt; &gt; -lpetscsnes -lpetscksp -lpetscdm -lpetscmat
<br>&gt; &gt; &gt; -lpetscvec -lpetsc<br>&gt; &gt; &gt; -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32<br>&gt; &gt; &gt; -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack<br>&gt; &gt; &gt; -lmkl_ia32 -lguide<br>&gt; &gt; &gt; -lPEPCF90 -Wl,-rpath,/opt/intel/compiler70/ia32/lib
<br>&gt; &gt; &gt; -Wl,-rpath,/opt/mpich/intel/lib<br>&gt; &gt; &gt; -L/opt/mpich/intel/lib -Wl,-rpath,-rpath<br>&gt; &gt; &gt; -Wl,-rpath,-ldl -L-ldl -lmpich<br>&gt; &gt; &gt; -Wl,-rpath,/opt/intel/compiler70/ia32/lib<br>
&gt; &gt; &gt; -Wl,-rpath,/opt/intel/compiler70/ia32/lib<br>&gt; &gt; &gt; -L/opt/intel/compiler70/ia32/lib<br>&gt; &gt; &gt; -Wl,-rpath,/usr/lib -Wl,-rpath,/usr/lib -L/usr/lib<br>&gt; &gt; &gt; -limf -lirc -lcprts -lcxa<br>
&gt; &gt; &gt; -lunwind -ldl -lmpichf90 -lPEPCF90<br>&gt; &gt; &gt; -Wl,-rpath,/opt/intel/compiler70/ia32/lib<br>&gt; &gt; &gt; -L/opt/intel/compiler70/ia32/lib -Wl,-rpath,/usr/lib<br>&gt; &gt; &gt; -L/usr/lib -lintrins<br>
&gt; &gt; &gt; -lIEPCF90 -lF90 -lm&nbsp;&nbsp;-Wl,-rpath,\ -Wl,-rpath,\ -L\<br>&gt; &gt; &gt; -ldl -lmpich<br>&gt; &gt; &gt; -Wl,-rpath,/opt/intel/compiler70/ia32/lib<br>&gt; &gt; &gt; -L/opt/intel/compiler70/ia32/lib<br>&gt; &gt; &gt; -Wl,-rpath,/usr/lib -L/usr/lib -limf -lirc -lcprts
<br>&gt; &gt; &gt; -lcxa -lunwind -ldl<br>&gt; &gt; &gt; /bin/rm -f ex19.o<br>&gt; &gt; &gt; C/C++ example src/snes/examples/tutorials/ex19 run<br>&gt; &gt; &gt; successfully with 1 MPI<br>&gt; &gt; &gt; process<br>&gt; &gt; &gt; C/C++ example src/snes/examples/tutorials/ex19 run
<br>&gt; &gt; &gt; successfully with 2 MPI<br>&gt; &gt; &gt; processes<br>&gt; &gt; &gt; Fortran example src/snes/examples/tutorials/ex5f run<br>&gt; &gt; &gt; successfully with 1 MPI<br>&gt; &gt; &gt; process<br>&gt; &gt; &gt; Completed test examples
<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; I then tried to run my own parallel code. It&#39;s a<br>&gt; &gt; &gt; simple code which prints<br>&gt; &gt; &gt; the rank of each processor.<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; If I compile the code using just mpif90 
test.F(using<br>&gt; &gt; &gt; just mpif.h)<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; I get 0,1,2,3 (4 processors).<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; however, if i change the code to use petsc.h etc<br>&gt; &gt; &gt; ie.<br>&gt; &gt; &gt;
<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; program ns2d_c<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; implicit none<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; #include &quot;include/finclude/petsc.h&quot;<br>
&gt; &gt; &gt; #include &quot;include/finclude/petscvec.h&quot;<br>&gt; &gt; &gt; #include &quot;include/finclude/petscmat.h&quot;<br>&gt; &gt; &gt; #include &quot;include/finclude/petscksp.h&quot;<br>&gt; &gt; &gt; #include &quot;include/finclude/petscpc.h&quot;
<br>&gt; &gt; &gt; #include &quot;include/finclude/petscmat.h90&quot;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; integer,parameter :: size_x=8,size_y=4<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; integer ::<br>&gt; &gt; &gt; ierr,Istart_p,Iend_p,Ntot,Istart_m,Iend_m,k
<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;&nbsp;&nbsp;PetscMPIInt&nbsp;&nbsp;&nbsp;&nbsp; nprocs,rank<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; call PetscInitialize(PETSC_NULL_CHARACTER,ierr)<br>&gt; &gt; &gt;
<br>&gt; &gt; &gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call<br>&gt; &gt; &gt; MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call<br>&gt; &gt; &gt; MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)<br>&gt; &gt; &gt;<br>
&gt; &gt; &gt; end program ns2d_c<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; i then rename the filename to ex2f.F and use &quot;make<br>&gt; &gt; &gt; ex2f&quot;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; the result I get is something like 0,0,0,0.
<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; Why is this so?<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; Thank you.<br>&gt; &gt; &gt;<br>&gt; &gt;<br>&gt; &gt;<br>&gt; &gt;<br>&gt; &gt;<br>&gt; &gt;<br>
&gt; &gt;<br>&gt; &gt;<br>&gt;<br><br></blockquote></div><br>