Hi Rajeev,<br><br>Thanks for the reply.<br>I first compiled the code using the wrapper command:<br><br> mpif90 -o prog prog.f90<br><br>then I submitted the script below:<br><br><font><font size="2">#!/bin/bash<br>
#PBS -q compute<br>
#PBS -N test_job<br>
# Request 1 Node with 12 Processors<br>
#PBS -l nodes=1:ppn=12<br>
#PBS -l walltime=100:00:00<br>
#PBS -S /bin/bash<br>
#PBS -M <a href="mailto:your_email@lboro.ac.uk">your_email@lboro.ac.uk</a><br>
#PBS -m bae<br>
#PBS -A your_account12345<br>
#<br>
# Go to the directory from which you submitted the job<br>
cd $PBS_O_WORKDIR<br>
<br>
module load intel_compilers<br>
module load bullxmpi<br>
<br>
mpirun ./Multi_beta<br><br>but I am still getting the same error, which is shown below:<br><br>running mpdallexit on hydra127<br>LAUNCHED mpd on hydra127 via<br>RUNNING: mpd on hydra127<br> Total Nb of PE: 1<br><br>
PE# 0 / 1 OK<br>PE# 0 0 0 0<br>PE# 0 0 33 0 165 0 65<br>PE# 0 -1 1 -1 -1 -1 -1<br> PE_Table, PE# 0 complete<br>PE# 0 -0.03 0.98 -1.00 1.00 -0.03 1.97<br> PE# 0 doesn t intersect any bloc<br>
PE# 0 will communicate with 0<br> single value<br> PE# 0 has 1 com. boundaries<br> Data_Read, PE# 0 complete<br><br> PE# 0 checking boundary type for<br>
0 1 1 1 0 165 0 65 nor sur sur sur gra 1 0 0<br> 0 2 33 33 0 165 0 65 EXC -> 1<br> 0 3 0 33 1 1 0 65 sur nor sur sur gra 0 1 0<br> 0 4 0 33 164 164 0 65 sur nor sur sur gra 0 -1 0<br>
0 5 0 33 0 165 1 1 cyc cyc cyc sur cyc 0 0 1<br> 0 6 0 33 0 165 64 64 cyc cyc cyc sur cyc 0 0 -1<br> PE# 0 Set new<br> PE# 0 FFT Table<br> PE# 0 Coeff<br>Fatal error in MPI_Send: Invalid rank, error stack:<br>
MPI_Send(176): MPI_Send(buf=0x7fff9425c388, count=1, MPI_DOUBLE_PRECISION, dest=1, tag=1, MPI_COMM_WORLD) failed<br>MPI_Send(98).: Invalid rank has value 1 but must be nonnegative and less than 1<br>rank 0 in job 1 hydra127_37620 caused collective abort of all ranks<br>
exit status of rank 0: return code 1<br><br>I am struggling to find the error and I am not sure where I messed up. The other examples run fine.<br><br>Thanks and Regards<br><br></font></font><div class="gmail_quote">
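For reference, the "Invalid rank" message means rank 0 called MPI_Send with dest=1 while MPI_COMM_WORLD contained only one process. A minimal sketch of a guard (a hypothetical standalone program, not the actual Multi_beta code) would be:

```fortran
! Hypothetical sketch: MPI_Send to dest=1 is only legal when the
! communicator actually contains a rank 1.  Checking the size of
! MPI_COMM_WORLD first makes the single-process mismatch obvious.
program rank_guard
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  integer, parameter :: dest = 1, tag = 1
  double precision :: buf

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  buf = 3.14d0
  if (rank == 0) then
     if (dest >= nprocs) then
        ! Only one process is running: dest does not exist.
        print *, 'only', nprocs, 'process(es); rank', dest, 'does not exist'
     else
        call MPI_Send(buf, 1, MPI_DOUBLE_PRECISION, dest, tag, &
                      MPI_COMM_WORLD, ierr)
     end if
  else if (rank == dest) then
     call MPI_Recv(buf, 1, MPI_DOUBLE_PRECISION, 0, tag, &
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
  end if
  call MPI_Finalize(ierr)
end program rank_guard
```

Run under a single process this prints the guard message instead of aborting; under two or more processes the send completes.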
On Fri, Feb 25, 2011 at 4:26 PM, Rajeev Thakur <span dir="ltr"><<a href="mailto:thakur@mcs.anl.gov">thakur@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
For some reason, each process thinks the total number of processes in the parallel job is 1. Check the wrapper script and try to run by hand using mpiexec. Also try running the cpi example from the examples directory and see if it runs correctly.<br>
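A hand-run sanity check along these lines might look as follows (hypothetical invocations; the exact path to the MPICH examples directory depends on your installation):

```shell
# Launch the application by hand with an explicit process count;
# mpirun/mpiexec with no -n option often defaults to one process,
# which would explain "Total Nb of PE: 1".
mpiexec -n 12 ./Multi_beta

# Build and run the bundled cpi example as a sanity check
# (the source-tree path here is an assumption).
cd mpich/examples
make cpi
mpiexec -n 4 ./cpi
```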
<br>
Rajeev<br>
<div><div></div><div class="h5"><br>
On Feb 25, 2011, at 9:43 AM, Ashwinkumar Dobariya wrote:<br>
<br>
> Hello everyone,<br>
><br>
> I am a newbie here. I am running a code for large eddy simulation of turbulent flow. I compile the code using the wrapper command and run it on the Hydra cluster. When I submit the script file, it shows the following error.<br>
><br>
> running mpdallexit on hydra127<br>
> LAUNCHED mpd on hydra127 via<br>
> RUNNING: mpd on hydra127<br>
> LAUNCHED mpd on hydra118 via hydra127<br>
> RUNNING: mpd on hydra118<br>
> Fatal error in MPI_Send: Invalid rank, error stack:<br>
> MPI_Send(176): MPI_Send(buf=0x7fffa7a1e4a8, count=1, MPI_DOUBLE_PRECISION, dest=1, tag=1, MPI_COMM_WORLD) failed<br>
> MPI_Send(98).: Invalid rank has value 1 but must be nonnegative and less than 1<br>
> Total Nb of PE: 1<br>
><br>
> PE# 0 / 1 OK<br>
> PE# 0 0 0 0<br>
> PE# 0 0 33 0 165 0 33<br>
> PE# 0 -1 1 -1 -1 -1 8<br>
> PE_Table, PE# 0 complete<br>
> PE# 0 -0.03 0.98 -1.00 1.00 -0.03 0.98<br>
> PE# 0 doesn t intersect any bloc<br>
> PE# 0 will communicate with 0<br>
> single value<br>
> PE# 0 has 2 com. boundaries<br>
> Data_Read, PE# 0 complete<br>
><br>
> PE# 0 checking boundary type for<br>
> 0 1 1 1 0 165 0 33 nor sur sur sur gra 1 0 0<br>
> 0 2 33 33 0 165 0 33 EXC -> 1<br>
> 0 3 0 33 1 1 0 33 sur nor sur sur gra 0 1 0<br>
> 0 4 0 33 164 164 0 33 sur nor sur sur gra 0 -1 0<br>
> 0 5 0 33 0 165 1 1 cyc cyc cyc sur cyc 0 0 1<br>
> 0 6 0 33 0 165 33 33 EXC -> 8<br>
> PE# 0 Set new<br>
> PE# 0 FFT Table<br>
> PE# 0 Coeff<br>
> rank 0 in job 1 hydra127_34565 caused collective abort of all ranks<br>
> exit status of rank 0: return code 1<br>
><br>
> I am struggling to find the error in my code. Can anybody suggest where I messed up?<br>
><br>
> Thanks and Regards,<br>
</div></div>> Ash _______________________________________________<br>
> mpich-discuss mailing list<br>
> <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
<br>
</blockquote></div><br>