[mpich-discuss] One-sided communication: MPI_Win_lock inside MPI_Win_fence

Ziaul Haque Olive mzh.olive at gmail.com
Tue May 29 16:43:48 CDT 2012


Hello Rajeev,

    yes, there is a reason in my program. The code I sent you was a
simplified version; the original one is a little bit different. But this
code sometimes works correctly and sometimes does not.

I have another question, about indexed datatypes and MPI_Accumulate.

If the indexed datatype for the target process contains indices like

       2, 5, 3, 1        -> out of order
or
       2, 4, 5, 4, 2, 2  -> out of order and repetition
or
       2, 3, 3, 3, 5     -> in order and repetition

would any of these be a problem?
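
To make this concrete, here is a minimal sketch of the second case (names
such as origin_buf, dest, and win are placeholders, not my actual code):

    /* indexed target type with out-of-order, repeated displacements */
    int blocklens[6] = {1, 1, 1, 1, 1, 1};
    int disps[6]     = {2, 4, 5, 4, 2, 2};   /* indices 2 and 4 repeat */
    MPI_Datatype target_type;
    MPI_Type_indexed(6, blocklens, disps, MPI_UNSIGNED_LONG, &target_type);
    MPI_Type_commit(&target_type);
    /* each origin element is MPI_BOR-ed into the window entry disps[i] */
    MPI_Accumulate(origin_buf, 6, MPI_UNSIGNED_LONG, dest,
                   0, 1, target_type, MPI_BOR, win);
    MPI_Type_free(&target_type);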

Thanks,
Ziaul

On Tue, May 29, 2012 at 4:32 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:

> Nesting of synchronization epochs is not allowed. Is there a reason to do
> it this way?
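>
> For what it's worth, a minimal sketch of the two non-nested alternatives
> (buf, n, ttype, dest, and win are placeholders):
>
>     /* active target: a fence epoch with no lock inside it */
>     MPI_Win_fence(0, win);
>     MPI_Accumulate(buf, n, MPI_UNSIGNED_LONG, dest, 0, 1, ttype, MPI_BOR, win);
>     MPI_Win_fence(0, win);
>
>     /* passive target: a lock/unlock epoch with no surrounding fences */
>     MPI_Win_lock(MPI_LOCK_EXCLUSIVE, dest, 0, win);
>     MPI_Accumulate(buf, n, MPI_UNSIGNED_LONG, dest, 0, 1, ttype, MPI_BOR, win);
>     MPI_Win_unlock(dest, win);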
>
> Rajeev
>
> On May 29, 2012, at 4:28 PM, Ziaul Haque Olive wrote:
>
> > Hello Rajeev,
> >
> > The whole code is a bit large; it is from the graph500 benchmark
> > (bfs_one_sided.c), which I am trying to transform a bit. Here is a
> > portion of the code:
> >
> >     MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
> >
> >     int ii = 0, jj, count = 1;
> >     acc_queue2_win_MPI_BOR_data[ii]   = masks[VERTEX_LOCAL(w)/elts_per_queue_bit%ulong_bits];
> >     acc_queue2_win_MPI_BOR_disp[ii]   = VERTEX_LOCAL(w)/elts_per_queue_bit/ulong_bits;
> >     acc_queue2_win_MPI_BOR_target[ii] = VERTEX_OWNER(w);
> >
> >     MPI_Datatype target_type;
> >     MPI_Type_indexed(count, blength, &acc_queue2_win_MPI_BOR_disp[ii], MPI_UNSIGNED_LONG, &target_type);
> >     MPI_Type_commit(&target_type);
> >     int dest = acc_queue2_win_MPI_BOR_target[ii];
> >
> >     MPI_Win_lock(MPI_LOCK_EXCLUSIVE, dest, 0, queue2_win);
> >     MPI_Accumulate(&acc_queue2_win_MPI_BOR_data[ii], count, MPI_UNSIGNED_LONG, dest, 0, 1, target_type, MPI_BOR, queue2_win);
> >     MPI_Win_unlock(dest, queue2_win);
> >
> >     MPI_Type_free(&target_type);
> >     MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
> >
> >
> > Let me know if it works.
> >
> > Thanks
> > Ziaul
> >
> > On Tue, May 29, 2012 at 4:13 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> > Can you send the complete program if it is small?
> >
> > Rajeev
> >
> > On May 29, 2012, at 2:54 PM, Ziaul Haque Olive wrote:
> >
> > > For a smaller number of processes, like 4, I was getting the correct
> > > result, but for 8 it was producing an incorrect result.
> > >
> > > I tried with and without lock/unlock; without lock/unlock it produces
> > > the correct result every time.
> > >
> > > Hello,
> > >
> > > I am getting an incorrect result when using lock/unlock synchronization
> > > inside a fence epoch. The pattern is as follows:
> > >
> > >           MPI_Win_fence(win1);
> > >               ..........
> > >           MPI_Win_lock(exclusive, win1);
> > >
> > >           MPI_Accumulate(MPI_BOR, win1);
> > >
> > >           MPI_Win_unlock(win1);
> > >
> > >           MPI_Win_fence(win1);
> > >
> > > Is it invalid to use lock in this way?
> > >
> > > Thanks,
> > > Ziaul.
> > >


More information about the mpich-discuss mailing list