[MPICH] any way to ask nemesis to turn-off and turn of active polling ?
Eric A. Borisch
eborisch at ieee.org
Mon Dec 17 11:49:44 CST 2007
Tan,
You may not be able to do it directly (see Darius's reply), but you
could always use MPI_Isend/MPI_Irecv and MPI_Test along with your own
delay interval to achieve the same end -- as long as you're willing to
accept a slight overhead (up to your delay interval) before the fast
nodes continue...
<pseudo-code>
MPI_Request syncRequest[MAX_NODES]; /* Arrays in some fashion */
MPI_Status  syncStatus[MAX_NODES];

if (myNode == slowNode) {
    j = 0;
    for (i = 0; i < mpiNodes; i++)
        if (i != myNode)
            MPI_Isend(&i, 1, MPI_INT, i, 42, MPI_COMM_WORLD, &syncRequest[j++]);
    MPI_Waitall(j, syncRequest, syncStatus); /* At this point you're
                                                "ready to go" ... might as well poll */
} else {
    MPI_Irecv(&i, 1, MPI_INT, slowNode, 42, MPI_COMM_WORLD, &syncRequest[0]);
    j = 0;
    while (!j) {
        MPI_Test(&syncRequest[0], &j, &syncStatus[0]); /* MPI_Wait would poll
                                                          until completion */
        if (!j) usleep(deadTime); /* Don't sleep again once it has completed */
    }
}
</pseudo-code>
You can even get fancy and use a growing delay within the while loop:
poll fast initially, then fall off to some slower rate.
Eric
On Dec 17, 2007 10:02 AM, Darius Buntinas <buntinas at mcs.anl.gov> wrote:
>
> No, there's no way to do that. Even MPI_Barrier will do active polling.
>
> Are you having issues where an MPI process that is waiting in a blocking
> call is taking CPU time away from other processes?
>
> -d
>
>
> On 12/14/2007 04:53 PM, chong tan wrote:
> > My issue is like this:
> >
> > Among all the processes, some will reach their first MPI
> > communication point faster than the others. Is there a way to tell
> > nemesis to start without active polling, and then turn active
> > polling on later with some function call?
> >
> > Or should I just use MPI_Barrier() on that ?
> >
> > thanks
> > tan
> >
> >
>
>
--
Eric A. Borisch
eborisch at ieee.org