[mpich-discuss] message four of enclosed digest.

richard rcq at iamrcq.com
Mon Jul 9 19:23:13 CDT 2012


Sirs, regarding your message number four: what mpiexec command should I use?
Assigning 8 processes to the i7 does start all 8, but cpi.exe gives an
MPI_COMM_WORLD error.
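(For reference, "assigning 8 processes" here means the usual single-machine
launch, something along the lines of

    mpiexec -n 8 cpi.exe

with cpi.exe standing in for the example program bundled with MPICH2.)

richard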

-----Original Message-----
From: mpich-discuss-bounces at mcs.anl.gov
[mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of
mpich-discuss-request at mcs.anl.gov
Sent: Monday, July 09, 2012 1:00 PM
To: mpich-discuss at mcs.anl.gov
Subject: mpich-discuss Digest, Vol 46, Issue 12

Send mpich-discuss mailing list submissions to
	mpich-discuss at mcs.anl.gov

To subscribe or unsubscribe via the World Wide Web, visit
	https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
or, via email, send a message with subject or body 'help' to
	mpich-discuss-request at mcs.anl.gov

You can reach the person managing the list at
	mpich-discuss-owner at mcs.anl.gov

When replying, please edit your Subject line so it is more specific than
"Re: Contents of mpich-discuss digest..."


Today's Topics:

   1. Re:  fence vs. lock-unlock (Jeff Hammond)
   2.  unable to connect in 2 windows machines (Sinta Kartika Maharani)
   3. Re:  Configuring MPICH2 in Cygwin (Anthony Chan)
   4. Re:  I7 CORES (Darius Buntinas)


----------------------------------------------------------------------

Message: 1
Date: Mon, 9 Jul 2012 08:17:12 -0500
From: Jeff Hammond <jhammond at alcf.anl.gov>
To: mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] fence vs. lock-unlock
Message-ID:
	<CAGKz=u+_hkH9re53vHMrcJLGtgiEhZjxfshdk4x7wqAopD87vQ at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

If your communication pattern is known in advance and is this regular, why are
you using one-sided communication?  It seems you could use send-recv here
quite easily.
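Something along these lines would do it (a rough sketch; the buffer
arguments, the double type, and the count are illustrative assumptions, not
from your code):

    #include <mpi.h>

    /* Ring-shifted exchange with MPI_Sendrecv instead of fence + MPI_Get.
       Each round, rank receives from (rank+i)%nproc, the same source the
       MPI_Get version reads from. */
    void ring_exchange(double *sendbuf, double *recvbuf, int count)
    {
        int rank, nproc, i;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);
        for (i = 0; i < nproc; i++) {
            int src = (rank + i) % nproc;          /* we read from src  */
            int dst = (rank - i + nproc) % nproc;  /* dst reads from us */
            MPI_Sendrecv(sendbuf, count, MPI_DOUBLE, dst, 0,
                         recvbuf, count, MPI_DOUBLE, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... computation on recvbuf ... */
        }
    }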

Background progress is usually available on supercomputers, so if that's your
long-term target, then MPI_Get is a reasonable design choice.
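On commodity hardware, MPICH's asynchronous progress (see Rajeev's note
below) is one substitute; with the Hydra mpiexec, enabling it per run would
look something like

    mpiexec -n 4 -env MPICH_ASYNC_PROGRESS 1 ./a.out

where ./a.out stands in for your program.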

Jeff


On Mon, Jul 9, 2012 at 1:20 AM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> That's because by default progress on passive target RMA may need an MPI
> function to be called on the target. You can set the environment variable
> MPICH_ASYNC_PROGRESS to enable asynchronous progress.
>
> Rajeev
>
> On Jul 9, 2012, at 1:10 AM, Jie Chen wrote:
>
>> I am using one-sided communication (lock-unlock) and see something that I
>> do not understand.
>>
>> In the normal case (using fence), my code block looks something like this
>> (rank is my process id, nproc is the total number of processes):
>>
>> for i = 0 to nproc-1
>> -- fence
>> -- MPI_Get something from process (rank+i)%nproc
>> -- fence
>> -- computation
>> end
>>
>> The time line for the above operations looks like the following, which is
>> completely normal (- means computation, * means communication including
>> fence and MPI_Get):
>>
>> proc 0: *--------*--------*--------*--------
>> proc 1: *--------*--------*--------*--------
>> proc 2: *--------*--------*--------*--------
>> proc 3: *--------*--------*--------*--------
>>
>> The above illustration shows perfect computation load balance, for
>> simplicity.
>>
>> However, when I replace the two fences with lock and unlock, the time
>> line looks like the following:
>>
>> proc 0: *--------**********--------*--------**********--------
>> proc 1: *--------*--------**********--------*--------
>> proc 2: *--------*******************--------*--------*--------
>> proc 3: *--------*--------**********--------**********--------
>>
>> The problem here is that sometimes the communication takes a very long
>> time to finish. In particular, this is attributed to the MPI_Win_unlock
>> call that will not return until the target process has finished one round
>> of computation.
>>
>> I do not understand why the unlock (or perhaps the actual data transfer)
>> is so time-consuming. The figures here show balanced computational work
>> load. When the work load is not balanced, I thought the lock/unlock
>> mechanism was better than using fence because it avoids barriers. But
>> according to these experiments, it appears that having barriers is better
>> than having none. Is this caused by the implementation of MPI_Win_unlock
>> or by the hardware?
>>
>>
>>
>> --
>> Jie Chen
>> Mathematics and Computer Science Division, Argonne National Laboratory
>> Address: 9700 S Cass Ave, Bldg 240, Lemont, IL 60439
>> Phone: (630) 252-3313
>> Email: jiechen at mcs.anl.gov
>> Homepage: http://www.mcs.anl.gov/~jiechen 



--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond


------------------------------

Message: 2
Date: Mon, 9 Jul 2012 21:23:12 +0700
From: Sinta Kartika Maharani <sintakm114080010 at gmail.com>
To: mpich-discuss at mcs.anl.gov
Subject: [mpich-discuss] unable to connect in 2 windows machines
Message-ID:
	<CAC66hFHPhcVrZEEkG86dq9jfcUbQK8kcX-sFM6m=aCjhUD_rmw at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi. I'm sorry if I'm asking too many times. :)

I'm having a problem when trying to execute a program on 2 Windows
machines. I have verified the following:
1. Both machines have the same username and password.
2. Both machines have the same version of MPICH2, 1.4.1p1.
3. Ping works fine (the IP addresses are 10.222.233.223 and 10.222.233.189).
4. There are no firewalls on either machine.
5. The registering and validating process is successful (see the note after
this list).
6. From 10.222.233.223:
C:\Program Files\MPICH2\bin>mpiexec -hosts 2 10.222.233.223 10.222.233.189 hostname
sinta-PC
sinta-PC
From 10.222.233.189:
C:\Program Files\MPICH2\bin>mpiexec -hosts 2 10.222.233.223 10.222.233.189 hostname
sinta-PC
sinta-PC
7. The domain and username: sinta-pc/sinta.
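(The registering and validating in item 5 refers to the usual
MPICH2-on-Windows pair of commands, run on each machine; shown here for
completeness:

    mpiexec -register
    mpiexec -validate
)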

But when trying to execute my own program:

C:\Program Files\MPICH2\bin>mpiexec -hosts 2 10.222.233.223 10.222.233.189 konvensional random 4
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(392).................:
MPID_Init(139)........................: channel initialization failed
MPIDI_CH3_Init(38)....................:
MPID_nem_init(196)....................:
MPIDI_CH3I_Seg_commit(366)............:
MPIU_SHMW_Hnd_deserialize(324)........:
MPIU_SHMW_Seg_open(863)...............:
MPIU_SHMW_Seg_create_attach_templ(763): unable to allocate shared
memory - OpenFileMapping The system cannot find the file specified.


job aborted:
rank: node: exit code[: error message]
0: 10.222.233.223: 123
1: 10.222.233.189: 1: process 1 exited without calling finalize
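
(For anyone reproducing this: a minimal program like the sketch below, which
is an illustration rather than my actual code, should fail the same way if
MPI_Init itself is at fault, since the shared-memory setup in the error
stack above happens inside MPI_Init.)

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        /* Channel (shared memory) initialization happens inside MPI_Init,
           so this is the first call that can hit the error above. */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("hello from rank %d\n", rank);
        MPI_Finalize();
        return 0;
    }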

Any suggestions?
Thanks anyway.


------------------------------

Message: 3
Date: Mon, 9 Jul 2012 09:42:13 -0500 (CDT)
From: Anthony Chan <chan at mcs.anl.gov>
To: Chen Shapira <chen2600 at gmail.com>
Cc: mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] Configuring MPICH2 in Cygwin
Message-ID:
	<1044064314.36625.1341844933520.JavaMail.root at zimbra-mb2.anl.gov>
Content-Type: text/plain; charset=utf-8


config.log shows that the test went through OK but hung afterward.
Could you try 1.4.1p1 and send us the configure output (c.txt, as
described in MPICH2's README) along with config.log?
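
For reference, capturing that output would look something like this (reusing
the configure options from your earlier message):

    ./configure --prefix=/home/mpich-install --disable-fc --disable-f77 2>&1 | tee c.txt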

A.Chan
----- Original Message -----
> Attached..
> Chen
> 
> On Sun, Jul 8, 2012 at 7:49 PM, Anthony Chan <chan at mcs.anl.gov> wrote:
> 
> >
> > Send us the config.log that was generated by configure.
> >
> > A.Chan
> >
> > ----- Original Message -----
> > > Hi Everyone,
> > >
> > > I'm building MPICH2 1.4.1 under Cygwin. I've downloaded the source,
> > > and I'm running the following configure command:
> > >
> > > ./configure --prefix=/home/mpich-install --disable-fc --disable-f77
> > >
> > > The problem is, the configure script cannot continue after showing
> > > the following line:
> > >
> > > "checking for ANSI C header files..."
> > >
> > > Please let me know if you have encountered this and how I can solve it.
> > > Thank you!
> > > Chen
> > >


------------------------------

Message: 4
Date: Mon, 9 Jul 2012 11:12:37 -0500
From: Darius Buntinas <buntinas at mcs.anl.gov>
To: mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] I7 CORES
Message-ID: <C8871425-E989-4897-86C1-BDC6477A1246 at mcs.anl.gov>
Content-Type: text/plain; charset=us-ascii


On Jul 8, 2012, at 4:06 AM, richard wrote:

> Q. Should MPICH2 1.4.1p1 run on an Intel(R) Core(TM) i7-3610QM 2.3 GHz
> processor with Turbo Boost? Mine will not. I have 16 GB RAM and Windows 7
> Professional 64-bit. The other machine I am running in parallel is an
> Intel(R) Core(TM)2 E7500.

Yes, it should.

> Also, what do you mean by an unstable version such as MPICH2 1.5.1p1?

There's no 1.5.1p1, but there is 1.5b2, which is a preview release.  Preview
releases may have bugs and other stability issues and are intended to be
used by vendors and package maintainers for testing.

-d


> richard



------------------------------

_______________________________________________
mpich-discuss mailing list
mpich-discuss at mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss


End of mpich-discuss Digest, Vol 46, Issue 12
*********************************************


