[mpich-discuss] lock/unlock regions and process synchronization

Rajeev Thakur thakur at mcs.anl.gov
Thu Jul 10 16:39:09 CDT 2008


Yes, this paper describes how to implement mutex locks using MPI one-sided
communication:
http://www-unix.mcs.anl.gov/~thakur/papers/atomic-mode.pdf. I can send you
the code if you like.
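[Editor's note: for readers without the paper at hand, the idea can be sketched
as a simple spin-lock mutex over an MPI window. This sketch uses the MPI-3
atomics MPI_Compare_and_swap and MPI_Fetch_and_op, which did not exist at the
time of this thread; the paper itself builds the mutex from MPI_Get/MPI_Put on
a waitlist of bytes. The names mutex_lock/mutex_unlock and the assumption that
rank `home` exposes a single int initialized to 0 in `win` are illustrative,
not from the paper.]

```c
#include <mpi.h>

/* Sketch only: acquire a mutex stored as one int (0 = free, 1 = held)
 * at displacement 0 in the window of rank `home`. */
static void mutex_lock(MPI_Win win, int home)
{
    const int unlocked = 0, locked = 1;
    int prev = locked;
    while (prev != unlocked) {   /* spin until we atomically swap 0 -> 1 */
        MPI_Win_lock(MPI_LOCK_SHARED, home, 0, win);
        MPI_Compare_and_swap(&locked, &unlocked, &prev,
                             MPI_INT, home, 0, win);
        MPI_Win_unlock(home, win);
    }
}

/* Release the mutex: atomically store 0 back into the lock word. */
static void mutex_unlock(MPI_Win win, int home)
{
    const int unlocked = 0;
    int prev;
    MPI_Win_lock(MPI_LOCK_SHARED, home, 0, win);
    MPI_Fetch_and_op(&unlocked, &prev, MPI_INT, home, 0, MPI_REPLACE, win);
    MPI_Win_unlock(home, win);
}
```

Any rank can then bracket its read-modify-write of the shared data (such as
the MPI_Get/MPI_Put sequence in the original question below) with
mutex_lock/mutex_unlock. A production version would add backoff in the spin
loop to avoid hammering the target.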

Rajeev

> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov 
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Robert Kubrick
> Sent: Thursday, July 10, 2008 4:29 PM
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] lock/unlock regions and process 
> synchronization
> 
> I checked the code in the examples. I understand that MPI_Accumulate/
> MPI_Get is the key to achieving read/write atomicity, but I actually
> need to perform a set of operations on the window, including random
> assignments. Is there a way to simulate a semaphore through MPI-2
> generalized requests?
> 
> Rob.
> 
> On Jul 10, 2008, at 4:53 PM, Rajeev Thakur wrote:
> 
> > You could use some out-of-band synchronization like that to achieve
> > atomicity.
> >
> > Rajeev
> >
> >> -----Original Message-----
> >> From: owner-mpich-discuss at mcs.anl.gov
> >> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Robert Kubrick
> >> Sent: Thursday, July 10, 2008 3:46 PM
> >> To: mpich-discuss at mcs.anl.gov
> >> Subject: Re: [mpich-discuss] lock/unlock regions and process
> >> synchronization
> >>
> >> Thanks Rajeev, I already have the book in the mail. On a related
> >> note, is there any potential issue in using SysV semaphores with
> >> one-sided communication? While the active task is completing the
> >> set of instructions, all the other slave processes could wait on a
> >> semaphore.
> >>
> >> On Jul 10, 2008, at 4:11 PM, Rajeev Thakur wrote:
> >>
> >>> What you are trying to do is an atomic read-modify-write, which is
> >>> not trivial to do with the MPI-2 one-sided operations. There are
> >>> two ways described in the book Using MPI-2. One is easier to
> >>> understand than the other but is less scalable. The source code
> >>> for both is available in the MPICH2 distribution: see
> >>> fetchandadd.c and fetchandadd_tree.c in test/mpi/rma.
> >>>
> >>> Rajeev
> >>>
> >>>> -----Original Message-----
> >>>> From: owner-mpich-discuss at mcs.anl.gov
> >>>> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Robert Kubrick
> >>>> Sent: Thursday, July 10, 2008 3:04 PM
> >>>> To: mpich-discuss at mcs.anl.gov
> >>>> Subject: [mpich-discuss] lock/unlock regions and process
> >>>> synchronization
> >>>>
> >>>> I need to access a one-sided window region in a master from
> >>>> multiple slave processes. Each slave needs to read the window
> >>>> contents, then
> >>>> update the same window area:
> >>>>
> >>>> MPI_Win_lock(...);
> >>>> MPI_Get(&idx, ...);
> >>>> MPI_Win_unlock(...);
> >>>>
> >>>> // Got idx, check value
> >>>> if( idx > 10 ) {
> >>>>    idx += 5;
> >>>>    MPI_Win_lock(...);
> >>>>    MPI_Put(&idx, ...);
> >>>>    MPI_Win_unlock(...);
> >>>> }
> >>>>
> >>>> How can I make sure another process does not access 'idx'
> >>>> between the
> >>>> two lock/unlock regions in the example?
> >>>>
> >>>>
> >>>
> >>
> >>
> >
> 
> 



