I think it is time to ask your system administrator for help.

  Matt

On Thu, Jul 9, 2009 at 12:02 AM, Yin Feng <yfeng1@tigers.lsu.edu> wrote:
Firstly, thanks for all your replies!

I changed the compiler to MPICH and ran a sample successfully, but the
problem is still there.
I ran my code on 4 nodes, each with 8 processors, and the load
information I see looks like this:
NODE  LOAD
0     32
1     0
2     0
3     0

Normally, in that case, what we should see is:
NODE  LOAD
0     8
1     8
2     8
3     8

So, does anyone have any idea about this?

Thank you in advance!

Sincerely,
YIN

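A quick way to narrow this down is a bare MPI rank check along the lines of the
sketch below (the file name rankcheck.c and the process count are just for
illustration; the key assumption is that both mpicc and mpiexec come from the
same MPI installation used to build PETSc). With a matching launcher it should
print one line per process with distinct ranks spread across the four hosts; if
every process reports "rank 0 of 1", the launcher does not match the MPI library
the code was linked against, which would also explain all 32 processes piling
onto one node.

/* rankcheck.c - minimal MPI rank/host check (illustrative sketch).
 * Build: mpicc -o rankcheck rankcheck.c   (mpicc from the same MPI as PETSc)
 * Run:   mpiexec -n 32 ./rankcheck        (the matching mpiexec/mpirun)
 * Expected: 32 distinct ranks spread over the 4 hosts.
 * Wrong launcher: every process prints "rank 0 of 1". */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int  rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(name, &namelen); /* host the process runs on */
    printf("rank %d of %d running on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}
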
On Wed, Jul 8, 2009 at 4:15 PM, Xin Qian <chianshin@gmail.com> wrote:
> You can try running the plain MPI samples that come with OpenMPI first, to
> make sure OpenMPI itself is running all right.
>
> Thanks,
>
> Xin Qian
>
> On Wed, Jul 8, 2009 at 4:48 PM, Yin Feng <yfeng1@tigers.lsu.edu> wrote:
>>
>> I tried an OpenMPI build of PETSc and used the mpirun provided by OpenMPI.
>> But when I check the load on each node, I find the master node takes all
>> the load and the others are just idle.
>>
>> Do you have any idea about this situation?
>>
>> Thanks in advance!
>>
>> Sincerely,
>> YIN
>>
>> On Wed, Jul 8, 2009 at 1:26 PM, Satish Balay <balay@mcs.anl.gov> wrote:
>> > Perhaps you are using the wrong mpiexec or mpirun. You'll have to use
>> > the corresponding mpiexec from the MPI you used to build PETSc.
>> >
>> > Or, if the MPI has special usage instructions, you should follow
>> > those [for example, some clusters require extra options to mpiexec].
>> >
>> > Satish
>> >
>> > On Wed, 8 Jul 2009, Yin Feng wrote:
>> >
>> >> I am a beginner with PETSc.
>> >> I tried PETSc example 5 (ex5) on 4 nodes.
>> >> However, it seems every node does exactly the same thing and outputs
>> >> the same results again and again. Is this a problem with PETSc or
>> >> with the MPI installation?
>> >>
>> >> Thank you in advance!
>> >>
>> >> Sincerely,
>> >> YIN
>> >>
>> >
>> >
>
>
>
> --
> QIAN, Xin (http://pubpages.unh.edu/~xqian/)
> xqian@unh.edu  chianshin@gmail.com
>

--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener