[mpich-discuss] Parallel jobs

Gus Correa gus at ldeo.columbia.edu
Mon Nov 29 13:39:07 CST 2010


Sorry, I have no idea what "visual windows" is or does.
Sometimes GUIs are harder to handle than just plain scripts.
Somewhere in the GUI you sent a screenshot of there may be a script
that launches the job; hopefully there is a way to edit that script,
or to add options through the drop-down menus and forms, and insert
the suggested option into the mpirun command line.
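For instance, if the launch script contains a line like the one below,
you would add "-s all" right after mpirun (a hypothetical sketch only:
the script name, path, and program below are placeholders, not taken
from your setup):

   # hypothetical launch script, e.g. run_job.sh, generated by the GUI
   # before:
   mpirun -n 16 /path/to/abinit
   # after, with stdin forwarded to all processes:
   mpirun -s all -n 16 /path/to/abinit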

I would guess that ABINIT, which seems to be all Linux/GNU/GPL based,
should work without the GUI (MAPS?),
and may let you launch the job with the mpirun command directly,
which may be less of a hassle than the GUI.
But this is just a guess.
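If ABINIT can be run that way, something along these lines may work
from a terminal (a sketch only: ABINIT traditionally reads a ".files"
file from stdin, but check the ABINIT documentation for your version;
the file names below are placeholders):

   cd /path/to/your/abinit/run
   mpirun -n 16 abinit < my_job.files > my_job.log 2>&1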

The ABINIT site seems to be down, but check what Wikipedia
says about it.  There seems to be another GUI for Linux (nanohub):
http://en.wikipedia.org/wiki/ABINIT

You may be better off asking specific questions on the ABINIT support
lists.

Good luck.

Gus Correa


Talla wrote:
> Hello;
> Thank you for the detailed information. However, I am using visual 
> Windows software with the code embedded in it, so I am not typing the 
> commands directly. I am wondering if there is a way to add the 
> stdin/stdout options and save them somewhere on the cluster.
> I am attaching a screenshot of the software I am using, just to show 
> you what I am talking about!
> 
> 
> On Mon, Nov 29, 2010 at 9:28 PM, Gus Correa <gus at ldeo.columbia.edu> wrote:
> 
>     Hi Talla
> 
>     It is no rocket science, just add them to your mpirun command.
>     Say:
> 
>     Old command (whatever you have there):
>     mpirun -n 16 my_program
> 
>     Modified command (whatever you had there with "-s all" added):
>     mpirun -s all -n 16 my_program
> 
>     I am not even sure if it will work, but it is worth giving it a try,
>     as all processes seem to be trying to read from stdin,
>     which in MPICH2 seems to require the specific flag '-s all'.
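>     For example (a sketch only: 'my_program' and 'input_file' are
>     placeholders), you would forward an input file to the stdin of
>     all ranks like this:
>
>        mpirun -s all -n 16 my_program < input_file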
> 
>     ***
> 
>     In a different key,
>     looking at your messages, and the paths that you refer to,
>     it seems to me that you may be mixing MPICH2 and OpenMPI.
>     At least the test command you mention suggests this:
> 
> 
>      >         I tested the cluster using the following command:
>      >         /opt/openmpi/bin/mpirun -nolocal -np 16 -machinefile machines
>      >         /opt/mpi-test/bin/mpi-ring
> 
> 
>     In particular, the mpirun commands of each MPI are quite different,
>     and to some extent so is the hardware support.
>     This mixing is a very common source of frustration.
> 
>     Both MPICH2 and OpenMPI follow the MPI standard.
>     However, they are different beasts,
>     and mixing them will not work at all.
> 
>     Stick to only one of them, both for compilation (say, mpicc or mpif90)
>     and for running the program (mpirun/mpiexec).
>     If in doubt, use full path names for all of them (mpicc, mpif90, mpirun, etc.).
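>     For example (the install paths below are only illustrative;
>     adjust them to wherever each MPI actually lives on your system):
>
>        /opt/mpich2/bin/mpicc -o my_program my_program.c
>        /opt/mpich2/bin/mpirun -n 16 ./my_program
>
>     Also, 'which mpicc mpirun' will show which flavor comes first
>     in your PATH.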
>     This mailing list is the support list for MPICH2:
>     http://www.mcs.anl.gov/research/projects/mpich2/
> 
>     For OpenMPI you need to look elsewhere:
>     http://www.open-mpi.org/
> 
>     You may want to check first how your (ABINIT) computational chemistry
>     program was compiled.
>     Computational chemistry is *not* my league: I have no idea what the
>     heck ABINIT does, probably some many-body Schroedinger equation, and
>     I forgot Quantum Mechanics a while ago.
>     Don't ask me about it.
> 
>     If your ABINIT came pre-compiled, you need to stick with
>     whatever flavor (and even version) of MPI they used.
>     If you compiled it yourself, you need to stick to the MPI flavor and
>     version associated with the *same* MPI compiler wrapper you used to
>     compile it.
>     That is, mpicc or mpif90 *and* mpirun/mpiexec must be from the same MPI
>     flavor and version.
>     Take a look at the Makefiles, and perhaps at the configure scripts;
>     they may give a hint.
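>     For example, the compiler wrappers of both flavors can report
>     what they wrap (the flags differ by flavor, so check your man
>     pages):
>
>        mpif90 -show      # MPICH2: print the underlying compiler and flags
>        mpif90 -showme    # OpenMPI: same idea
>        grep -i mpif90 Makefile   # see which wrapper the build used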
> 
>     Good luck.
> 
> 
>     My two cents,
>     Gus Correa
> 
>     Talla wrote:
> 
>         Hello Gus,
>         Thank you for pointing me to this error, but I am new to the
>         Linux world, so I am wondering if you have a ready-made script
>         or commands to add the stdin flags to mpirun?
>         Thanks
> 
>         On Mon, Nov 29, 2010 at 7:34 PM, Gus Correa
>         <gus at ldeo.columbia.edu> wrote:
> 
>            Hi Talla
> 
>            It sounds like all processes are trying to read from
>            stdin, right?
>            Stdin/stdout are not guaranteed to be available to all
>            processes.
> 
>            Have you tried adding the flag "-s all" to your
>            mpiexec/mpirun command line?
>            Check its meaning with "man mpiexec".
> 
>            My two cents,
>            Gus Correa
> 
>            Talla wrote:
> 
>                Hello,
> 
>                This issue is consuming all my time, with no luck.  I can
>                submit any job without any error message, *BUT ONLY* when
>                I am using *one* CPU.  When I am using more than one CPU,
>                I get the error message: "*forrtl: severe (24): end-of-file
>                during read, unit 5, file stdin*".  I get this error line
>                repeated as many times as the number of CPUs I am using.
> 
>                This means that the other nodes are not doing any work
>                here.  So please help me link in the other nodes, so I can
>                take advantage of all of them.
>                I have 8 PCs and each one has 2 CPUs (16 CPUs in total).
> 
>                I tested the cluster using the following command:
>                /opt/openmpi/bin/mpirun -nolocal -np 16 -machinefile machines
>                /opt/mpi-test/bin/mpi-ring
> 
>                and all the nodes can send and receive data like a charm.
> 
>                Just to mention that I have the Rocks clustering
>                software, running on CentOS.
> 
>                The code I am using is called ABINIT, and it is embedded
>                as a plugin in a visual software package called MAPS.
> 
>                Your help is really appreciated.
> 
> 
> _______________________________________________
> mpich-discuss mailing list
> mpich-discuss at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss


