Running PETSc on InfiniBand

Saswata Hier-Majumder saswata at umd.edu
Sat Nov 1 13:20:34 CDT 2008


Satish,
Thanks for the prompt response!

After rerunning the installation script with the correct path, I found a
number of error messages in the configure.log file that look like this:

sh: /opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpif90 -o conftest conftest.o
Executing: /opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpif90 -o conftest conftest.o
sh:
Possible ERROR while running linker: 
/opt/intel/fce/10.0.023/lib/libimf.so: warning: warning: feupdateenv is 
not implemented and will always fail

I recognize the last part of the error message. I need to use the option
-nochoicemod when compiling an MPI code with mpif90 to avoid this
error. Is there any way for me to pass that option to the PETSc
configure script? I tried
--with-fc=mpif90 -nochoicemod
in the install shell script, but that did not seem to work.
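
If quoting is the issue, maybe something like the sketch below would pass the
flag through; I have not verified this against my PETSc version, so the exact
option handling may differ:

export PETSC_DIR=$PWD
# Point configure at the full wrapper paths and quote the extra flag together
# with mpif90, so configure receives "compiler + flag" as a single value.
# (Unverified sketch; option behaviour may differ between PETSc versions.)
./config/configure.py \
    --with-cc=/opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpicc \
    --with-fc="/opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpif90 -nochoicemod"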

Thanks
Sash

Satish Balay wrote:
> On Sat, 1 Nov 2008, Saswata Hier-Majumder wrote:
>
>   
>> Greetings,
>> I am new to PETSc and not sure how to interpret the results I get when running
>> the PETSc example codes (the Bratu problem and hello world) on an SGI Altix 1300
>> cluster. Any help will be greatly appreciated!
>>
>> I compile the PETSc code using a makefile that came with the PETSc
>> distribution, submit the job to the queue using PBS Pro, and run using mpirun.
>> The solution returned by running the Bratu problem
>> ($PETSC_DIR/src/snes/examples/tutorials/ex5f90.F) looks correct, but I am
>> not sure whether PETSc is using all the processors supplied to it by mpirun.
>> MPI_COMM_SIZE always returns a size of 1 and MPI_COMM_RANK always returns rank
>> 0, when I call these routines from inside my code, despite using a command
>> like
>>
>> mpirun -n 16 -machinefile $PBS_NODEFILE myexecutable.exe
>>
>> in my submit shell script.
>>
>> I get similar results when I run hello world with PETSc. It prints 16 lines
>> displaying rank=0 and size=1 when run with 16 processors. Run with plain MPI,
>> the same hello world prints the sizes and ranks as expected, i.e. 16 different
>> lines with size 16 and ranks from 0 to 15.
>>
>> Am I correct in assuming that PETSc is somehow not able to use the version of
>> mpirun implemented in vltmpi? I reran the PETSc configuration using the install
>> shell script given below, but it did not help:
>>
>> export PETSC_DIR=$PWD
>> ./config/configure.py --with-mpi-dir=/opt/vltmpi/OPENIB/mpi.icc.rsh/bin
>>     
>
> This should be --with-mpi-dir=/opt/vltmpi/OPENIB/mpi.icc.rsh
>
> Configure should print a summary of compilers being used. [it should
> be /opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpicc etc..]
>
> If you can use /opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpicc & mpirun with a
> sample MPI test code and run it in parallel, then you should be able to do
> the exact same thing with PETSc examples [using the same tools]
>
> Satish
>
>   
>> make all test
>>
>> Sash Hier-Majumder
>>
>>
>>     
>
>   
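
PS: To make sure I am following the suggestion above correctly, the sanity
check I plan to run looks roughly like the sketch below. The hello.c source
and hosts.txt machinefile are just placeholders (under PBS I would use
$PBS_NODEFILE); the wrapper paths are the ones from this thread.

cat > hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
# build with the same wrapper that PETSc's configure should pick up
/opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpicc -o hello hello.c
# run with the matching mpirun; every rank should report size 4 here
/opt/vltmpi/OPENIB/mpi.icc.rsh/bin/mpirun -n 4 -machinefile hosts.txt ./hello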

-- 
www.geol.umd.edu/~saswata



