Running PETSc on InfiniBand
Saswata Hier-Majumder
saswata at umd.edu
Sat Nov 1 11:55:35 CDT 2008
Greetings,
I am new to PETSc and not sure how to interpret the results I get when
running the PETSc example codes (the Bratu problem and hello world) on an
SGI Altix 1300 cluster. Any help will be greatly appreciated!
I compile the PETSc code using a makefile that came with the PETSc
distribution, submit the job to the queue using PBS Pro, and run it with
mpirun. The solution returned by the Bratu problem
($PETSC_DIR/src/snes/examples/tutorials/ex5f90.F) looks correct, but I
am not sure whether PETSc is using all of the processors supplied to it
by mpirun. When I call MPI_COMM_SIZE and MPI_COMM_RANK from inside my
code, they always return a size of 1 and a rank of 0, even though I
launch with a command like
mpirun -n 16 -machinefile $PBS_NODEFILE myexecutable.exe
in my submit shell script.
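The check inside my code looks roughly like this (a simplified sketch,
not my actual file; the program and variable names are just for
illustration, and the include path may differ between PETSc versions):

! minimal sketch of the rank/size check inside a PETSc program
      program checkranks
      implicit none
#include "include/finclude/petsc.h"
      integer ierr, rank, nsize
! PetscInitialize calls MPI_Init and sets up PETSC_COMM_WORLD
      call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
      call MPI_Comm_size(PETSC_COMM_WORLD, nsize, ierr)
      call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)
      print *, 'size =', nsize, ' rank =', rank
      call PetscFinalize(ierr)
      end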
I get similar results when I run hello world with PETSc: it prints 16
lines showing rank=0 and size=1 when run on 16 processors. Run with
plain MPI (no PETSc), the same hello world prints the sizes and ranks as
expected, i.e. 16 different lines with size 16 and ranks from 0 to 15.
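For comparison, the plain-MPI hello world is essentially the textbook
version, something like this (again a sketch, not the exact file I ran):

! plain MPI version, no PETSc involved
      program hello
      implicit none
      include 'mpif.h'
      integer ierr, rank, nsize
      call MPI_Init(ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nsize, ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      print *, 'size =', nsize, ' rank =', rank
      call MPI_Finalize(ierr)
      end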
Am I correct in assuming that PETSc is somehow not able to use the
version of mpirun implemented in vltmpi? I rebuilt PETSc using the
install shell script given below, but it did not help:
export PETSC_DIR=$PWD
./config/configure.py --with-mpi-dir=/opt/vltmpi/OPENIB/mpi.icc.rsh/bin
make all test
Sash Hier-Majumder
--
www.geol.umd.edu/~saswata