Running PETSc on InfiniBand
Satish Balay
balay at mcs.anl.gov
Sat Nov 1 12:14:03 CDT 2008
On Sat, 1 Nov 2008, Saswata Hier-Majumder wrote:
> Greetings,
> I am new to PETSc and not sure how to interpret the results I get when
> running the PETSc example codes (the Bratu problem and hello world) on an
> SGI Altix 1300 cluster. Any help will be greatly appreciated!
>
> I compile the PETSc code using a makefile that came with the PETSc
> distribution, submit the job to the queue using PBS Pro, and run it with
> mpirun. The solution returned by the Bratu problem
> ($PETSC_DIR/src/snes/examples/tutorials/ex5f90.F) looks correct, but I am
> not sure whether PETSc is using all of the processors supplied to it by
> mpirun. MPI_COMM_SIZE always returns a size of 1 and MPI_COMM_RANK always
> returns rank 0 when I call these routines from inside my code, despite
> using a command like
>
> mpirun -n 16 -machinefile $PBS_NODEFILE myexecutable.exe
>
> in my submit shell script.
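For illustration, a minimal submit shell script of the kind described above
might look like the following (the job name and executable name are
placeholders, and the exact resource-request syntax depends on the local
PBS Pro setup):

  #!/bin/bash
  #PBS -N bratu
  # (resource request, e.g. a -l line, goes here; syntax is site dependent)
  cd $PBS_O_WORKDIR
  mpirun -n 16 -machinefile $PBS_NODEFILE myexecutable.exe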
>
> I get similar results when I run hello world with PETSc: it prints 16 lines
> displaying rank=0 and size=1 when run with 16 processors. Run with plain
> MPI (without PETSc), helloworld prints the sizes and ranks as expected,
> i.e. 16 different lines with size 16 and ranks from 0 to 15.
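For reference, a hello world of the kind described above, where every process
prints the communicator size and its own rank, can be as small as the
following sketch (an illustration, not the poster's actual code):

  ! hello.f90: each process reports the size and rank it sees
  program helloworld
    implicit none
    include 'mpif.h'
    integer :: ierr, rank, nprocs
    call MPI_Init(ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    print *, 'size =', nprocs, '  rank =', rank
    call MPI_Finalize(ierr)
  end program helloworld

A correctly launched 16-process run prints size 16 and ranks 0 through 15;
sixteen copies of "size = 1, rank = 0" typically mean each process is running
as an independent serial job, i.e. the executable was built against a
different MPI than the mpirun used to launch it.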
>
> Am I correct in assuming that PETSc is somehow not able to use the version
> of mpirun implemented in vltmpi? I reran the PETSc install using the shell
> script given below, but it did not help:
>
> export PETSC_DIR=$PWD
> ./config/configure.py --with-mpi-dir=/opt/vltmpi/OPENIB/mpi.icc.rsh/bin
This should be --with-mpi-dir=/opt/vltmpi/OPENIB/mpi.icc.rsh
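That is, the install script from the original message with only that option
changed:

  export PETSC_DIR=$PWD
  ./config/configure.py --with-mpi-dir=/opt/vltmpi/OPENIB/mpi.icc.rsh
  make all test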
Configure should print a summary of the compilers being used. [It should
be /opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpicc etc.]
If you can use /opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpicc & mpirun with a
sample MPI test code and run it in parallel, then you should be able to do
the exact same thing with the PETSc examples [using the same tools].
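For example, assuming the hello world sketched above is saved as hello.f90
(the mpif90 wrapper name is an assumption; mpicc works the same way for a
C test):

  /opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpif90 hello.f90 -o hello
  /opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpirun -n 16 -machinefile $PBS_NODEFILE ./hello

If that prints ranks 0 through 15, the same mpirun should also work for the
PETSc examples once PETSc is reconfigured as above.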
Satish
> make all test
>
> Sash Hier-Majumder
>
>