[mpich-discuss] Can I run MPI program without mpirun/mpiexec?

Dave Goodell goodell at mcs.anl.gov
Wed Feb 24 10:46:24 CST 2010


ch3:shm is old and unsupported at this point.  I would not recommend  
going down that path.

We don't have access to any alpha hosts, so we don't have specific  
atomic instruction support for it.  This will be a problem in  
ch3:nemesis as well, although those atomic operations come from a  
separate, newer library.  You can still run on an alpha machine by  
emulating atomic access with locks, which it appears you have already  
done.

The error code that you are getting back just indicates some failure  
occurred when initializing the pthread mutex used for emulation.   
You'll have to use a debugger, ltrace, strace, or something similar to  
figure out why that is failing if you want to try to make this work.

You could try ch3:sock instead as long as you don't mind using TCP for  
communication.  This channel shouldn't have any dependence on atomic  
assembly instructions.
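A configure invocation along these lines selects the sock channel; the host triplet, install prefix, and Fortran flags below are placeholders you would adapt for your own cross-compilation setup:

```shell
# Illustrative only -- adjust the triplet, prefix, and flags for
# your environment before running.
./configure --with-device=ch3:sock \
            --host=alpha-linux-gnu \
            --prefix=/opt/mpich2-alpha \
            --disable-f77 --disable-fc
make
make install
```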

-Dave

On Feb 24, 2010, at 1:50 AM, John Xu wrote:

> In src/mpid/ch3/channels/shm/configure, should the following code
> be conditioned with
> if test "$cross_compiling" = yes; then  ?
>
> -----------------------------------------------
> # Check for memory atomic instructions
>
> { $as_echo "$as_me:$LINENO: checking for x86 mfence instruction using
> __asm__" >&5
> $as_echo_n "checking for x86 mfence instruction using __asm__... "  
> >&6; }
> if test "${pac_cv_have_gcc_asm_and_x86_mfence+set}" = set; then
>  $as_echo_n "(cached) " >&6
> else
>
> I am not familiar with this at all, just wondering if there is
> anything that could be done here.
>
> thanks,
> john
>
> On Wed, Feb 24, 2010 at 1:14 AM, John Xu <johnzxu at gmail.com> wrote:
>> Thanks, Dave and Pavan.
>>
>> I tried both hydra and gforker, and gforker appears easier to use.
>> I can now start two processes on a 2-core ALPHA SMP processor.
>>
>> I used ch3:sock and my program got stuck at the beginning of the
>> simulation:
>> Process 0 of 2 is on (none)
>> Process 1 of 2 is on (none)
>> I think this is due to a network socket problem with my simulator.
>>
>> Then I switched to ch3:nemesis and got the following errors:
>> Fatal error in MPI_Init: Other MPI error, error stack:
>> MPIR_Init_thread(394).....: Initialization failed
>> MPID_Init(135)............: channel initialization failed
>> MPIDI_CH3_Init(43)........:
>> MPID_nem_init(202)........:
>> MPIDI_CH3I_Seg_commit(346): generic failure with errno = 16
>> [0]0:Return code = 1
>> [0]1:Return code = 0, signaled with Interrupt
>>
>> At this point, all I need is SMP MPI calls over shared memory, so I
>> would really only need ch3:shm, but I got configuration errors
>> trying to configure with ch3:shm:
>>
>> checking for sys/sysctl.h... yes
>> checking for x86 mfence instruction using __asm__... configure:  
>> error:
>> in `/home/john/benchmarks/mpich2/mpich2-1.2.1/src/mpid/ch3/channels/ 
>> shm':
>> configure: error: cannot run test program while cross compiling
>> See `config.log' for more details.
>> configure: error: ./configure failed for channels/shm
>> configure: error: Configure of src/mpid/ch3 failed!
>>
>> It appears that there is a configuration error: even though I am
>> trying to cross-compile for ALPHA, the configuration script
>> is trying to use x86 instructions for the shared-memory locks.
>>
>> Any idea or recommendations on how to fix this problem?
>>
>> I saw a report of this in an earlier post last November as well,
>> but did not see any resolution.
>>
>> Your help will be greatly appreciated.
>>
>> john
>>
>> On Tue, Feb 23, 2010 at 5:42 PM, Dave Goodell <goodell at mcs.anl.gov>  
>> wrote:
>>> You can try using the hydra process manager.  It doesn't require  
>>> Python:
>>> http://wiki.mcs.anl.gov/mpich2/index.php/Using_the_Hydra_Process_Manager
>>>
>>> -Dave
>>>
>>> On Feb 23, 2010, at 5:25 PM, John Xu wrote:
>>>
>>>> Hi, Dave.
>>>>
>>>> Thanks a lot for the email.
>>>>
>>>> The problem for me is that I had major trouble trying to
>>>> cross-compile Python for alpha. Without Python, I cannot use the
>>>> good process managers MPICH2 provides.
>>>>
>>>> I am trying to see if there is any way to run a statically
>>>> compiled binary in a standalone fashion. According to Jayesh, it
>>>> is possible, but I cannot get it to run successfully myself.
>>>>
>>>> I am still wondering if there is any possibility to do this ...
>>>>
>>>> thanks,
>>>> john
>>>>
>>>> On Tue, Feb 23, 2010 at 5:17 PM, Dave Goodell  
>>>> <goodell at mcs.anl.gov> wrote:
>>>>>
>>>>> Basically, this won't work.
>>>>>
>>>>> The process manager plays a necessary role in starting the MPI  
>>>>> processes
>>>>> and
>>>>> bootstrapping their communication.  It is extremely unlikely  
>>>>> that you
>>>>> will
>>>>> be able to make something sensible work by running the processes  
>>>>> yourself
>>>>> and just twiddling a few environment variables (for more than a  
>>>>> single
>>>>> process, anyway).
>>>>>
>>>>> -Dave
>>>>>
>>>>> On Feb 23, 2010, at 5:06 PM, John Xu wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I ended up setting up the environment manually myself and got
>>>>>> the program to run without mpiexec.
>>>>>> However, the process I kicked off does not seem to reflect the
>>>>>> number of processes I requested.
>>>>>>
>>>>>> For example, I set PMI_RANK=0 and PMI_SIZE=2,
>>>>>> but when I launched cpi, I got the following:
>>>>>>
>>>>>> Process 0 of 1 is on (none)
>>>>>> pi is approximately 3.1415926544231332, Error is  
>>>>>> 0.0000000008333401
>>>>>> wall clock time = 0.000976
>>>>>>
>>>>>> The total number of processes is 1 instead of 2.
>>>>>>
>>>>>> Any idea which environment variables I need to set to get the
>>>>>> desired behaviour?
>>>>>>
>>>>>> Thanks,
>>>>>> john
>>>>>>
>>>>>> On Tue, Feb 23, 2010 at 4:25 PM, John Xu <johnzxu at gmail.com>  
>>>>>> wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am trying to start up an MPI process in a processor-simulator
>>>>>>> environment, so I have the same problem as formerly posted in
>>>>>>>
>>>>>>>
>>>>>>>  https://lists.mcs.anl.gov/mailman/htdig/mpich-discuss/2009-November/006008.html
>>>>>>>
>>>>>>> Jayesh,
>>>>>>>
>>>>>>> You indicated that there is a way, similar to the Windows
>>>>>>> debugging workflow, of starting the processes manually,
>>>>>>> but it requires two command prompts.
>>>>>>>
>>>>>>> Does it work in a Linux environment running SMP with, say, 2
>>>>>>> cores?
>>>>>>>
>>>>>>> Since I only have one dummy terminal emulated by the simulator,
>>>>>>> I cannot open two command prompts.
>>>>>>> Can I set up the environment as in your document and start the
>>>>>>> two processes in the background?
>>>>>>>
>>>>>>> thanks,
>>>>>>> john
>>>>>>>
>>>>>>>
>>>>>>> -----------------------------------------------------------
>>>>>>>  Hi,
>>>>>>>  Yes, you can run an MPI program without mpiexec/mpirun. Let  
>>>>>>> us know
>>>>>>> if you have any problems.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Jayesh
>>>>>>> ----- Original Message -----
>>>>>>> From: "junli gu" <gujunli at gmail.com>
>>>>>>> To: mpich-discuss at mcs.anl.gov
>>>>>>> Sent: Friday, November 20, 2009 2:58:30 PM GMT -06:00
>>>>>>> US/Canada Central
>>>>>>> Subject: [mpich-discuss] Can I run MPI program without mpirun/ 
>>>>>>> mpiexec?
>>>>>>>
>>>>>>>
>>>>>>> Hi everyone:
>>>>>>>
>>>>>>> I want to run an MPI program like a normal binary, without the
>>>>>>> mpirun/mpiexec command, like this: ./mpi_hello . Is this
>>>>>>> possible?
>>>>>>>
>>>>>>> This would only be possible if I could compile the MPI program
>>>>>>> and put all the runtime information into a standalone binary.
>>>>>>> I don't know if that is possible.
>>>>>>>
>>>>>>> Thank you very much!
>>>>>>>
>>>>>>> --
>>>>>>> ************************************************
>>>>>>> Junli Gu--谷俊丽
>>>>>>> Coordinate Science Lab
>>>>>>> University of Illinois at Urbana-Champaign
>>>>>>> ************************************************
>>>>>>>
>>>>>> _______________________________________________
>>>>>> mpich-discuss mailing list
>>>>>> mpich-discuss at mcs.anl.gov
>>>>>> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>>>>>
>>>
>>


