[mpich-discuss] One-sided communication: MPI_Win_lock inside MPI_Win_fence
Ziaul Haque Olive
mzh.olive at gmail.com
Tue May 29 21:13:28 CDT 2012
Thanks Jim,
I am now thinking of changing my code a little bit so that I can avoid using
lock/unlock. Let's see what happens.
-Ziaul
On Tue, May 29, 2012 at 8:59 PM, Jim Dinan <dinan at mcs.anl.gov> wrote:
> Hi Ziaul,
>
> Your use of lock/unlock sounds correct. Can you use a barrier instead
> of the fence to be sure that all data has arrived?
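>
> Roughly what I have in mind (just a sketch, not tested; win, buf, n,
> target_type, dest, comm, and my_rank are placeholder names, and it
> assumes every update goes through a lock/accumulate/unlock epoch, so
> each transfer is complete at the target once MPI_Win_unlock returns):
>
>   /* origin: flush one coalesced update under a passive-target epoch */
>   MPI_Win_lock(MPI_LOCK_EXCLUSIVE, dest, 0, win);
>   MPI_Accumulate(buf, n, MPI_UNSIGNED_LONG, dest, 0, 1, target_type,
>                  MPI_BOR, win);
>   MPI_Win_unlock(dest, win);   /* update complete at dest on return */
>
>   /* all processes have finished (and completed) their updates */
>   MPI_Barrier(comm);
>
>   /* before reading the local window, take a lock on it so the public
>      and private window copies are synchronized */
>   MPI_Win_lock(MPI_LOCK_SHARED, my_rank, 0, win);
>   /* ... read local window memory ... */
>   MPI_Win_unlock(my_rank, win);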
>
> ~Jim.
>
> On 05/29/2012 05:33 PM, Ziaul Haque Olive wrote:
> > Hello Jim,
> >
> > I was coalescing data communication: data is collected in a buffer
> > instead of being sent immediately. Let's say the size of this temporary
> > buffer is N. Whenever the buffer fills up, I need to send all the data
> > from it so that it can be reused in later iterations, but without any
> > synchronization it is not possible to be sure that the buffer is free to
> > reuse, so I used lock/unlock. Moreover, at the start of each iteration
> > all data transfers from remote processes to the local window must have
> > finished, so the fence is used.
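> >
> > To make that concrete, the flush path looks roughly like this (a
> > simplified sketch; N, buf, disp, n_pending, dest and target_type are
> > illustrative names, and target_type is built from disp elsewhere):
> >
> >   /* collect updates locally until the staging buffer is full */
> >   buf[n_pending]  = value;
> >   disp[n_pending] = target_disp;
> >   n_pending++;
> >
> >   if (n_pending == N) {
> >       /* flush with lock/unlock: once MPI_Win_unlock returns, buf and
> >          disp may safely be overwritten in later iterations */
> >       MPI_Win_lock(MPI_LOCK_EXCLUSIVE, dest, 0, queue2_win);
> >       MPI_Accumulate(buf, n_pending, MPI_UNSIGNED_LONG, dest,
> >                      0, 1, target_type, MPI_BOR, queue2_win);
> >       MPI_Win_unlock(dest, queue2_win);
> >       n_pending = 0;
> >   }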
> >
> > The MPI_MODE_NOSUCCEED in the first fence was a typo; it is not
> > present in the original code.
> >
> > Thanks,
> > Ziaul
> >
> > On Tue, May 29, 2012 at 5:11 PM, Jim Dinan <dinan at mcs.anl.gov> wrote:
> >
> > Hi Ziaul,
> >
> > MPI_MODE_NOSUCCEED is incorrect in the first call to fence, since RMA
> > operations succeed (i.e., follow) this synchronization call. Hopefully
> > this is just a typo and you meant MPI_MODE_NOPRECEDE.
> >
> > In terms of the datatypes, the layout at the origin and target need
> > not be the same. However, the basic unit types and the total number
> > of elements must match for accumulate.
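> >
> > For example (names made up for illustration), an origin buffer of 4
> > contiguous MPI_UNSIGNED_LONGs can update 4 scattered target elements
> > described by an indexed type; the layouts differ, but the basic type
> > and the total element count (4 on each side) match:
> >
> >   int blens[4] = {1, 1, 1, 1};
> >   int disps[4] = {7, 2, 9, 4};           /* scattered target layout */
> >   MPI_Datatype ttype;
> >   MPI_Type_indexed(4, blens, disps, MPI_UNSIGNED_LONG, &ttype);
> >   MPI_Type_commit(&ttype);
> >
> >   /* origin: 4 contiguous elements; target: 1 instance of ttype,
> >      which also holds 4 MPI_UNSIGNED_LONG elements */
> >   MPI_Accumulate(origin_buf, 4, MPI_UNSIGNED_LONG,
> >                  dest, 0, 1, ttype, MPI_BOR, win);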
> >
> > In the example given below, you shouldn't need lock/unlock in
> > addition to the fences. Can you refine this a little more to
> > capture why this is needed in your application?
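> >
> > In other words, something along these lines (sketched with the names
> > from your snippet, lock/unlock removed) should be sufficient, with the
> > two fences providing all of the synchronization:
> >
> >   MPI_Win_fence(MPI_MODE_NOPRECEDE, queue2_win);
> >
> >   MPI_Accumulate(&acc_queue2_win_MPI_BOR_data[ii], count,
> >                  MPI_UNSIGNED_LONG, dest, 0, 1, target_type,
> >                  MPI_BOR, queue2_win);
> >
> >   /* the closing fence completes all accumulates on queue2_win */
> >   MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);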
> >
> > Best,
> > ~Jim.
> >
> >
> > On 5/29/12 4:43 PM, Ziaul Haque Olive wrote:
> >
> > Hello Rajeev,
> >
> > Yes, there is a reason for doing it this way in my program. The code I
> > sent you was a simplified version; the original one is a little bit
> > different. But this code sometimes works correctly and sometimes does
> > not.
> >
> > I have another question, about indexed datatypes and MPI_Accumulate:
> >
> > If the indexed datatype for the target process contains indices like
> >
> > 2, 5, 3, 1 -> out of order
> > or
> > 2, 4, 5, 4, 2, 2 -> out of order and repetition
> > or
> > 2, 3, 3, 3, 5 -> in order and repetition
> >
> > would these be a problem?
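> >
> > For concreteness, the second case would be constructed roughly like
> > this (block lengths and values are just for illustration):
> >
> >   int blens[6] = {1, 1, 1, 1, 1, 1};
> >   int disps[6] = {2, 4, 5, 4, 2, 2};    /* out of order, repeated */
> >   MPI_Datatype ttype;
> >   MPI_Type_indexed(6, blens, disps, MPI_UNSIGNED_LONG, &ttype);
> >   MPI_Type_commit(&ttype);
> >   /* then used as the target datatype of MPI_Accumulate with MPI_BOR */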
> >
> > Thanks,
> > Ziaul
> >
> > On Tue, May 29, 2012 at 4:32 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> >
> > Nesting of synchronization epochs is not allowed. Is there a
> > reason
> > to do it this way?
> >
> > Rajeev
> >
> > On May 29, 2012, at 4:28 PM, Ziaul Haque Olive wrote:
> >
> > > Hello Rajeev,
> > >
> > > The whole code is a bit large; it is from the graph500 benchmark
> > > (bfs_one_sided.c), which I am trying to transform a bit. Here is a
> > > portion of the code:
> > >
> > > MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
> > >
> > > int ii = 0, jj, count = 1;
> > > acc_queue2_win_MPI_BOR_data[ii] =
> > >     masks[VERTEX_LOCAL(w)/elts_per_queue_bit%ulong_bits];
> > > acc_queue2_win_MPI_BOR_disp[ii] =
> > >     VERTEX_LOCAL(w)/elts_per_queue_bit/ulong_bits;
> > > acc_queue2_win_MPI_BOR_target[ii] = VERTEX_OWNER(w);
> > >
> > > MPI_Datatype target_type;
> > > MPI_Type_indexed(count, blength, &acc_queue2_win_MPI_BOR_disp[ii],
> > >                  MPI_UNSIGNED_LONG, &target_type);
> > > MPI_Type_commit(&target_type);
> > > int dest = acc_queue2_win_MPI_BOR_target[ii];
> > > MPI_Win_lock(MPI_LOCK_EXCLUSIVE, dest, 0, queue2_win);
> > >
> > > MPI_Accumulate(&acc_queue2_win_MPI_BOR_data[ii], count,
> > >                MPI_UNSIGNED_LONG, dest, 0, 1, target_type,
> > >                MPI_BOR, queue2_win);
> > >
> > > MPI_Win_unlock(dest, queue2_win);
> > > MPI_Type_free(&target_type);
> > >
> > > MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
> > >
> > >
> > > Let me know if it works.
> > >
> > > Thanks
> > > Ziaul
> > >
> > > On Tue, May 29, 2012 at 4:13 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> > > Can you send the complete program if it is small.
> > >
> > > Rajeev
> > >
> > > On May 29, 2012, at 2:54 PM, Ziaul Haque Olive wrote:
> > >
> > > > For a smaller number of processes, like 4, I was getting the correct
> > > > result, but for 8 it was producing an incorrect result.
> > > >
> > > > I tried with and without lock/unlock; without lock/unlock it produces
> > > > the correct result every time.
> > > >
> > > > Hello,
> > > >
> > > > I am getting an incorrect result while using lock/unlock
> > > > synchronization inside a fence epoch. The pattern is as follows:
> > > >
> > > > MPI_Win_fence(win1);
> > > > ..........
> > > > MPI_Win_lock(exclusive, win1);
> > > >
> > > > MPI_Accumulate(MPI_BOR, win1);
> > > >
> > > > MPI_Win_unlock(win1);
> > > >
> > > > MPI_Win_fence(win1);
> > > >
> > > > Is it invalid to use lock in this way?
> > > >
> > > > Thanks,
> > > > Ziaul.
> > > >