[mpich-discuss] mpiexec hanged up with large stdin

Pavan Balaji balaji at mcs.anl.gov
Tue Feb 28 11:15:36 CST 2012


[cc'ing mpich-discuss]

If you read the data in your application, this should not hang.  In your 
test program, the application is not reading the data.

As I pointed out, this should not hang even if your application does not 
read the data, but that's a separate problem and should not affect any 
real applications.

  -- Pavan

On 02/28/2012 11:06 AM, Yuheng Xie wrote:
> Yes, I know. My program provides an option: reading data from a file or
> from stdin. I just found that reading from stdin does not work as I
> expected.
>
> 2012/2/28 Pavan Balaji <balaji at mcs.anl.gov>
>
>
>     Why aren't you reading that data from the application?
>
>     I understand that this shouldn't hang in any case.  So, yes, this is
>     a bug and we'll fix it (eventually), but Jeff's suggestion to
>     directly read from the input file is the correct way to do this.
>     stdin is not the right way to do this, and many MPI
>     implementations don't even support it.
>
>       -- Pavan
>
>
>     On 02/28/2012 09:40 AM, Yuheng Xie wrote:
>
>         Jeff,
>
>         Thank you for your reply! Although the MPI standard does not
>         require it, stdin/stdout and passing data through pipes are
>         traditional conveniences in Unix. I'd like MPICH2 to improve on
>         these in addition to performance. So, at present it seems I
>         have to generate temporary files before using my data.
>
>         --
>         Yuheng
>
>         2012/2/28 Jeff Hammond <jhammond at alcf.anl.gov>
>
>
>             The MPI standard does not even require stdin to be
>             supported, so it is entirely reasonable for an
>             implementation to support it only up to some reasonable
>             finite value.  If you were to run
>             "cat /dev/random | mpiexec.gforker -n 2 ./test", that
>             program would never return either.
>
>             Since you are intending to use C++, why don't you modify
>             your code to read from an istream associated with a proper
>             input file rather than std::cin?  The source changes should
>             be essentially trivial.
>
>             Jeff
>
>             On Tue, Feb 28, 2012 at 1:15 AM, Yuheng Xie
>             <thinelephant at gmail.com> wrote:
>          > Hi,
>          >
>          > I started using MPICH2 recently and found a problem when
>          > reading a large file (>64KB) from stdin. My program hangs on
>          > the line MPI_Init(&argc, &argv);. Could anyone help me?
>          > Thanks a lot.
>          >
>          > My program: test.cpp
>          > #include <mpi.h>
>          >
>          > int main(int argc, char **argv)
>          > {
>          >     MPI_Init(&argc, &argv);
>          >     MPI_Finalize();
>          >
>          >     return 0;
>          > }
>          >
>          > Compile and run:
>          > mpicxx test.cpp -o test
>          > cat test.txt | mpiexec.gforker -n 2 ./test   # This is OK
>          > cat test.txt | mpiexec.hydra -n 2 ./test   # When test.txt is larger than 65536 bytes, this won't return.
>          >
>          > MPICH2 was installed with the following commands:
>          > ./configure --prefix=/usr/local --enable-cxx \
>          >     --enable-fast=O3,nochkmsg,notiming,ndebug \
>          >     --with-pm=hydra:mpd:gforker --enable-smpcoll
>          > make
>          > sudo make install
>          >
>          >
>          > _______________________________________________
>          > mpich-discuss mailing list     mpich-discuss at mcs.anl.gov
>          > To manage subscription options or unsubscribe:
>          > https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>          >
>
>
>
>             --
>             Jeff Hammond
>             Argonne Leadership Computing Facility
>             University of Chicago Computation Institute
>             jhammond at alcf.anl.gov / (630) 252-5381
>             http://www.linkedin.com/in/jeffhammond
>             https://wiki.alcf.anl.gov/old/index.php/User:Jhammond
>             https://wiki-old.alcf.anl.gov/index.php/User:Jhammond
>
>
>
>
>
>
>     --
>     Pavan Balaji
>     http://www.mcs.anl.gov/~balaji
>
>

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji

