[mpich-discuss] processor print out
Dave Goodell
goodell at mcs.anl.gov
Fri May 20 11:13:51 CDT 2011
[please keep mpich-discuss@ CCed]
Can you try hydra instead of gforker? That may or may not help, depending on what the real problem is.
Is your code written in Fortran or C? MPICH2's MPI_Init call does its best to disable buffering inside the C standard library (setvbuf), but I don't think it can do as much to help with the Fortran side of things:
http://wiki.mcs.anl.gov/mpich2/index.php/Frequently_Asked_Questions#Q:_My_output_does_not_appear_until_the_program_exits.
If it is a Fortran program, you might have to modify it to incorporate those flush calls.
-Dave
On May 20, 2011, at 9:57 AM CDT, Ryan Campbell Crocker wrote:
> I'm using MPICH2-1.3.2p1 with gforker, and I'm on a workstation where I have direct access to mpiexec.
>
> Quoting Dave Goodell <goodell at mcs.anl.gov>:
>
>> On May 20, 2011, at 9:34 AM CDT, Ryan Campbell Crocker wrote:
>>
>>> I am using a code developed by someone else, and it seems like the I/O (print, write) is turned off or blocked for all but the root processor. What MPI call would be responsible for this? I'm sure this is a simple fix.
>>
>> What version of MPICH2 are you using? Which process manager? Is this on a cluster where you have direct access via "mpiexec" or is there a batch scheduling system?
>>
>> This isn't behavior that is controlled (or even specified) by the MPI standard or any MPI routines. It's entirely implementation-dependent behavior that is typically influenced most heavily by the process manager. AFAIK all stock MPICH2 process managers (even the deprecated ones) should support this correctly when used normally, but batch scheduling systems like PBS or SLURM might interfere.
>>
>> -Dave
>>
>> _______________________________________________
>> mpich-discuss mailing list
>> mpich-discuss at mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>>
>
>
>