[mpich-discuss] processor print out

Dave Goodell goodell at mcs.anl.gov
Fri May 20 12:57:35 CDT 2011


Again, *please* keep mpich-discuss@ CCed.  This mailing list is publicly archived and those archives frequently help others to solve their problems without any intervention from us, the MPICH2 developers.  That saves everyone lots of time.

I strongly recommend against SMPD if you have the option of using hydra.  Hydra is more robust, actively maintained, and has more features.  Bug reports for SMPD will generally be closed as "wontfix".  We only recommend SMPD for Windows because it is currently the only option available on that platform.
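
Hydra has been the default process manager since MPICH2 1.3, and installations usually ship an "mpiexec.hydra" binary alongside the default mpiexec, so you can often try it without rebuilding (the program name and process count here are placeholders):

    mpiexec.hydra -n 4 ./your_program

If it isn't installed, reconfiguring MPICH2 with --with-pm=hydra and rebuilding should get it.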

-Dave

On May 20, 2011, at 12:49 PM CDT, Ryan Crocker wrote:

> I'm on a Unix machine so I used SMPD and that solved the problem.  Thanks for pointing me in the right direction.
> 
> -Ryan
> 
> On May 20, 2011, at 12:13 PM, Dave Goodell wrote:
> 
>> [please keep mpich-discuss@ CCed]
>> 
>> Can you try hydra instead of gforker?  That may or may not help, depending on what the real problem is.
>> 
>> Is your code written in Fortran or C?  MPICH2's MPI_Init call does its best to disable buffering inside the C standard library (setvbuf), but I don't think it can do as much to help with the Fortran side of things:
>> 
>> http://wiki.mcs.anl.gov/mpich2/index.php/Frequently_Asked_Questions#Q:_My_output_does_not_appear_until_the_program_exits.
>> 
>> If it is a Fortran program, you might have to modify it to incorporate those flush calls.
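>> 
>> In C, the equivalent fix is an explicit fflush(stdout) after each print.  A minimal sketch (untested; the message text is made up):
>> 
>>     #include <mpi.h>
>>     #include <stdio.h>
>> 
>>     int main(int argc, char **argv)
>>     {
>>         int rank;
>>         MPI_Init(&argc, &argv);
>>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>         printf("hello from rank %d\n", rank);
>>         fflush(stdout);  /* push the line out even if stdout is buffered */
>>         MPI_Finalize();
>>         return 0;
>>     }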
>> 
>> -Dave
>> 
>> On May 20, 2011, at 9:57 AM CDT, Ryan Campbell Crocker wrote:
>> 
>>> I'm using MPICH2 1.3.2p1 with gforker, and I'm on a workstation where I have direct access to mpiexec.
>>> 
>>> Quoting Dave Goodell <goodell at mcs.anl.gov>:
>>> 
>>>> On May 20, 2011, at 9:34 AM CDT, Ryan Campbell Crocker wrote:
>>>> 
>>>>> I am using a code developed by someone else and it seems like the I/O (print, write) is turned off or blocked for all but the root processor.  What MPI call would be responsible for this?  I'm sure this is a simple fix.
>>>> 
>>>> What version of MPICH2 are you using?  Which process manager?  Is this on a cluster where you have direct access via "mpiexec" or is there a batch scheduling system?
>>>> 
>>>> This isn't behavior that is controlled (or even specified) by the MPI standard or any MPI routines.  It's entirely implementation-dependent behavior that is typically influenced most heavily by the process manager.  AFAIK all stock MPICH2 process managers (even the deprecated ones) should support this correctly when used normally, but batch scheduling systems like PBS or SLURM might interfere.
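>>>> 
>>>> A quick way to isolate the process manager is to run a non-MPI command under mpiexec and check whether output from every process comes back (adjust the process count to your setup):
>>>> 
>>>>     mpiexec -n 4 hostname
>>>> 
>>>> If you see one line per process there but not from your application, stdout forwarding is fine and buffering inside the application is the likely culprit.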
>>>> 
>>>> -Dave
>>>> 
> 
> Ryan Crocker
> University of Vermont, School of Engineering
> Mechanical Engineering Department
> rcrocker at uvm.edu
> 315-212-7331
> 


