Hi Pan,

I also got very large library files when I built PETSc with MPICH2.
Btw, I have tried several options but I still don't understand why I can't get MPI to work with PETSc.

The 4 processes run together, but each one runs its own copy of the code.
I just use:

      integer :: nprocs, rank, ierr

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
      call MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)
      print *, rank, nprocs
      call PetscFinalize(ierr)

The answer I get is "0 1" repeated 4 times (every process reports rank 0 and size 1), instead of "0 4", "1 4", "2 4", "3 4".
I'm using my school's server's MPICH, and it works if I just compile with pure MPI.
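For comparison, the pure-MPI version that I compile with mpif90 is roughly the following (a sketch from memory, so the actual test.F may differ slightly):

      program test_mpi
! rough sketch of the pure-MPI test: uses only mpif.h, no PETSc
      implicit none
      include 'mpif.h'
      integer :: nprocs, rank, ierr

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
      print *, rank, nprocs
      call MPI_Finalize(ierr)
      end program test_mpi

Run on 4 processors, this prints ranks 0, 1, 2 and 3 with nprocs = 4 on every process.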
Btw, to send the job to 4 processors I need to use a script file:

#BSUB -o std-output
#BSUB -q linux_parallel_test
#BSUB -n 4
/usr/lsf6/bin/mpijob_gm /opt/mpich/myrinet/intel/bin/mpirun a.out
I wonder if the problem lies here...
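One thing I notice is that PETSc was configured with --with-mpi-dir=/opt/mpich/intel/, but the script launches through /opt/mpich/myrinet/intel/bin/mpirun. If that mismatch is the cause, then a script along these lines might be what I need. This is only a guess; I'm assuming an mpirun exists under /opt/mpich/intel/bin and that the queue accepts it, which I haven't checked:

#BSUB -o std-output
#BSUB -q linux_parallel_test
#BSUB -n 4
# guess: launch with the mpirun from the same MPICH that PETSc was configured against
/usr/lsf6/bin/mpijob_gm /opt/mpich/intel/bin/mpirun a.out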

Thank you.

<div><span class="gmail_quote">On 1/16/07, <b class="gmail_sendername">li pan</b> &lt;<a href="mailto:li76pan@yahoo.com">li76pan@yahoo.com</a>&gt; wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">I did try to download and install petsc2.3.2, but end<br>up with error: mpich can not be download &amp; installed,
<br>please install mpich for windows manually.<br>In the homepage of mpich2, I didn&#39;t choose the version<br>for windows but the source code version. And compiled<br>it by myself. Then, I gave the --with-mpi-dir=&quot;install
<br>dir&quot;. Petsc was configured, and now it&#39;s doing &quot;make&quot;.<br>One interesting thing is, I installed petsc in linux<br>before. The mpi libraries were very large<br>(libmpich.a==60 mb). But this time in cygwin it was
<br>only several mbs.<br><br>best<br><br>pan<br><br><br>--- Ben Tay &lt;<a href="mailto:zonexo@gmail.com">zonexo@gmail.com</a>&gt; wrote:<br><br>&gt; hi,<br>&gt;<br>&gt; i install PETSc using the following command:<br>&gt;
<br>&gt; ./config/configure.py --with-vendor-compilers=intel<br>&gt; --with-gnu-compilers=0<br>&gt;<br>--with-blas-lapack-dir=/lsftmp/g0306332/inter/mkl/lib/32<br>&gt; --with-mpi-dir=/opt/mpich/intel/ --with-x=0<br>&gt; --with-shared
<br>&gt;<br>&gt; then i got:<br>&gt;<br>&gt; Compilers:<br>&gt;<br>&gt;&nbsp;&nbsp; C Compiler:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /opt/mpich/intel/bin/mpicc<br>&gt; -fPIC -g<br>&gt;&nbsp;&nbsp; Fortran Compiler:&nbsp;&nbsp; /opt/mpich/intel/bin/mpif90<br>&gt; -I. -fPIC -g -w90 -w
<br>&gt; Linkers:<br>&gt;&nbsp;&nbsp; Shared linker:&nbsp;&nbsp; /opt/mpich/intel/bin/mpicc<br>&gt; -shared&nbsp;&nbsp;-fPIC -g<br>&gt;&nbsp;&nbsp; Dynamic linker:&nbsp;&nbsp; /opt/mpich/intel/bin/mpicc<br>&gt; -shared&nbsp;&nbsp;-fPIC -g<br>&gt; PETSc:<br>&gt;&nbsp;&nbsp; PETSC_ARCH: linux-mpif90
<br>&gt;&nbsp;&nbsp; PETSC_DIR: /nas/lsftmp/g0306332/petsc-2.3.2-p8<br>&gt;&nbsp;&nbsp; **<br>&gt;&nbsp;&nbsp; ** Now build and test the libraries with &quot;make all<br>&gt; test&quot;<br>&gt;&nbsp;&nbsp; **<br>&gt;&nbsp;&nbsp; Clanguage: C<br>&gt;&nbsp;&nbsp; Scalar type:real<br>
&gt; MPI:<br>&gt;&nbsp;&nbsp; Includes: [&#39;/opt/mpich/intel/include&#39;]<br>&gt;&nbsp;&nbsp; PETSc shared libraries: enabled<br>&gt;&nbsp;&nbsp; PETSc dynamic libraries: disabled<br>&gt; BLAS/LAPACK:<br>&gt; -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32
<br>&gt; -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack<br>&gt; -lmkl_ia32 -lguide<br>&gt;<br>&gt; i ran &quot;make all test&quot; and everything seems fine<br>&gt;<br>&gt; /opt/mpich/intel/bin/mpicc -c -fPIC -g<br>&gt;
<br>-I/nas/lsftmp/g0306332/petsc-2.3.2-p8-I/nas/lsftmp/g0306332/petsc-<br>&gt; 2.3.2-p8/bmake/linux-mpif90<br>&gt; -I/nas/lsftmp/g0306332/petsc-2.3.2-p8/include<br>&gt; -I/opt/mpich/intel/include<br>&gt; -D__SDIR__=&quot;src/snes/examples/tutorials/&quot; 
ex19.c<br>&gt; /opt/mpich/intel/bin/mpicc -fPIC -g&nbsp;&nbsp;-o ex19<br>&gt; ex19.o-Wl,-rpath,/nas/lsftmp/g0306332/petsc-<br>&gt; 2.3.2-p8/lib/linux-mpif90<br>&gt;<br>-L/nas/lsftmp/g0306332/petsc-2.3.2-p8/lib/linux-mpif90<br>&gt; -lpetscsnes -lpetscksp -lpetscdm -lpetscmat
<br>&gt; -lpetscvec -lpetsc<br>&gt; -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/32<br>&gt; -L/lsftmp/g0306332/inter/mkl/lib/32 -lmkl_lapack<br>&gt; -lmkl_ia32 -lguide<br>&gt; -lPEPCF90 -Wl,-rpath,/opt/intel/compiler70/ia32/lib
<br>&gt; -Wl,-rpath,/opt/mpich/intel/lib<br>&gt; -L/opt/mpich/intel/lib -Wl,-rpath,-rpath<br>&gt; -Wl,-rpath,-ldl -L-ldl -lmpich<br>&gt; -Wl,-rpath,/opt/intel/compiler70/ia32/lib<br>&gt; -Wl,-rpath,/opt/intel/compiler70/ia32/lib
<br>&gt; -L/opt/intel/compiler70/ia32/lib<br>&gt; -Wl,-rpath,/usr/lib -Wl,-rpath,/usr/lib -L/usr/lib<br>&gt; -limf -lirc -lcprts -lcxa<br>&gt; -lunwind -ldl -lmpichf90 -lPEPCF90<br>&gt; -Wl,-rpath,/opt/intel/compiler70/ia32/lib
<br>&gt; -L/opt/intel/compiler70/ia32/lib -Wl,-rpath,/usr/lib<br>&gt; -L/usr/lib -lintrins<br>&gt; -lIEPCF90 -lF90 -lm&nbsp;&nbsp;-Wl,-rpath,\ -Wl,-rpath,\ -L\<br>&gt; -ldl -lmpich<br>&gt; -Wl,-rpath,/opt/intel/compiler70/ia32/lib<br>
&gt; -L/opt/intel/compiler70/ia32/lib<br>&gt; -Wl,-rpath,/usr/lib -L/usr/lib -limf -lirc -lcprts<br>&gt; -lcxa -lunwind -ldl<br>&gt; /bin/rm -f ex19.o<br>&gt; C/C++ example src/snes/examples/tutorials/ex19 run<br>&gt; successfully with 1 MPI
<br>&gt; process<br>&gt; C/C++ example src/snes/examples/tutorials/ex19 run<br>&gt; successfully with 2 MPI<br>&gt; processes<br>&gt; Fortran example src/snes/examples/tutorials/ex5f run<br>&gt; successfully with 1 MPI<br>
&gt; process<br>&gt; Completed test examples<br>&gt;<br>&gt; I then tried to run my own parallel code. It&#39;s a<br>&gt; simple code which prints<br>&gt; the rank of each processor.<br>&gt;<br>&gt; If I compile the code using just mpif90 
test.F(using<br>&gt; just mpif.h)<br>&gt;<br>&gt; I get 0,1,2,3 (4 processors).<br>&gt;<br>&gt; however, if i change the code to use petsc.h etc<br>&gt; ie.<br>&gt;<br>&gt;<br>&gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; program ns2d_c<br>&gt;<br>&gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; implicit none
<br>&gt;<br>&gt;<br>&gt; #include &quot;include/finclude/petsc.h&quot;<br>&gt; #include &quot;include/finclude/petscvec.h&quot;<br>&gt; #include &quot;include/finclude/petscmat.h&quot;<br>&gt; #include &quot;include/finclude/petscksp.h&quot;
<br>&gt; #include &quot;include/finclude/petscpc.h&quot;<br>&gt; #include &quot;include/finclude/petscmat.h90&quot;<br>&gt;<br>&gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; integer,parameter :: size_x=8,size_y=4<br>&gt;<br>&gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; integer ::<br>&gt; ierr,Istart_p,Iend_p,Ntot,Istart_m,Iend_m,k
<br>&gt;<br>&gt;&nbsp;&nbsp;PetscMPIInt&nbsp;&nbsp;&nbsp;&nbsp; nprocs,rank<br>&gt;<br>&gt;<br>&gt;<br>&gt;<br>&gt; call PetscInitialize(PETSC_NULL_CHARACTER,ierr)<br>&gt;<br>&gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call<br>&gt; MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)<br>&gt;<br>
&gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call<br>&gt; MPI_Comm_size(PETSC_COMM_WORLD,nprocs,ierr)<br>&gt;<br>&gt; end program ns2d_c<br>&gt;<br>&gt;<br>&gt;<br>&gt; i then rename the filename to ex2f.F and use &quot;make<br>&gt; ex2f&quot;<br>&gt;<br>
&gt; the result I get is something like 0,0,0,0.<br>&gt;<br>&gt;<br>&gt;<br>&gt; Why is this so?<br>&gt;<br>&gt; Thank you.<br>&gt;<br><br><br><br><br>____________________________________________________________________________________