[mpich-discuss] One-sided communication: MPI_Win_lock inside MPI_Win_fence
Jim Dinan
dinan at mcs.anl.gov
Tue May 29 17:11:15 CDT 2012
Hi Ziaul,
MPI_MODE_NOSUCCEED is incorrect in the first call to fence, since RMA
operations do succeed (follow) that synchronization call. Hopefully this
is just a typo and you meant MPI_MODE_NOPRECEDE.
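For reference, here is a minimal sketch of the usual assertion pattern
(win is just a placeholder window):

    /* Open the epoch: no RMA operations precede this fence. */
    MPI_Win_fence(MPI_MODE_NOPRECEDE, win);

    /* ... MPI_Put / MPI_Get / MPI_Accumulate calls on win ... */

    /* Close the epoch: no RMA operations follow this fence. */
    MPI_Win_fence(MPI_MODE_NOSUCCEED, win);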
In terms of the datatypes, the layout at the origin and target need not
be the same. However, the basic unit types and the total number of
elements must match for accumulate.
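For example (a rough sketch, not your exact code; dest and win are
placeholders, and the call is assumed to sit inside a valid epoch): a
contiguous origin buffer of three unsigned longs can be accumulated into
a non-contiguous target layout, as long as the target type is also built
from three MPI_UNSIGNED_LONG elements.

    /* Origin: 3 contiguous unsigned longs. */
    unsigned long origin[3] = {1UL, 2UL, 4UL};

    /* Target: the same 3 elements, scattered at displacements 0, 4, 9. */
    int blocklens[3] = {1, 1, 1};
    int disps[3]     = {0, 4, 9};
    MPI_Datatype target_type;
    MPI_Type_indexed(3, blocklens, disps, MPI_UNSIGNED_LONG, &target_type);
    MPI_Type_commit(&target_type);

    /* Basic type (MPI_UNSIGNED_LONG) and element count (3) match. */
    MPI_Accumulate(origin, 3, MPI_UNSIGNED_LONG,
                   dest, 0, 1, target_type, MPI_BOR, win);

    MPI_Type_free(&target_type);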
In the example given below, you shouldn't need lock/unlock in addition
to the fences. Can you refine this a little more to capture why this is
needed in your application?
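In other words, something along these lines should be enough (a sketch of
your snippet with the lock/unlock removed; the names are taken from your
code below):

    MPI_Win_fence(MPI_MODE_NOPRECEDE, queue2_win);

    /* ... build target_type and dest as in your snippet ... */
    MPI_Accumulate(&acc_queue2_win_MPI_BOR_data[ii], count, MPI_UNSIGNED_LONG,
                   dest, 0, 1, target_type, MPI_BOR, queue2_win);

    MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);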
Best,
~Jim.
On 5/29/12 4:43 PM, Ziaul Haque Olive wrote:
> Hello Rajeev,
>
> Yes, there is a reason in my program. The code I sent you was a
> simplified version; the original one is a little bit different. But this
> code sometimes works correctly and sometimes does not.
>
> I have another question about indexed datatypes and MPI_Accumulate.
>
> If the indexed datatype for the target process contains indices like
>
> 2, 5, 3, 1 -> out of order
> or
> 2, 4, 5, 4, 2, 2 -> out of order and repetition
> or
> 2, 3, 3, 3, 5 -> in order and repetition
>
> would these be a problem?
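> For concreteness, this is the kind of construction I mean (the values
> are made up):
>
> int blocklens[6] = {1, 1, 1, 1, 1, 1};
> int disps[6]     = {2, 4, 5, 4, 2, 2};  /* out of order, with repetition */
> MPI_Datatype target_type;
> MPI_Type_indexed(6, blocklens, disps, MPI_UNSIGNED_LONG, &target_type);
> MPI_Type_commit(&target_type);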
>
> Thanks,
> Ziaul
>
> On Tue, May 29, 2012 at 4:32 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>
> Nesting of synchronization epochs is not allowed. Is there a reason
> to do it this way?
>
> Rajeev
>
> On May 29, 2012, at 4:28 PM, Ziaul Haque Olive wrote:
>
> > Hello Rajeev,
> >
> > The whole code is a bit large; it is from the graph500 benchmark
> > (bfs_one_sided.c), which I am trying to transform a bit. Here is a
> > portion of the code:
> >
> > MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
> >
> > int ii = 0, jj, count = 1;
> > acc_queue2_win_MPI_BOR_data[ii] = masks[VERTEX_LOCAL(w)/elts_per_queue_bit%ulong_bits];
> > acc_queue2_win_MPI_BOR_disp[ii] = VERTEX_LOCAL(w)/elts_per_queue_bit/ulong_bits;
> > acc_queue2_win_MPI_BOR_target[ii] = VERTEX_OWNER(w);
> >
> > MPI_Datatype target_type;
> > MPI_Type_indexed(count, blength, &acc_queue2_win_MPI_BOR_disp[ii], MPI_UNSIGNED_LONG, &target_type);
> > MPI_Type_commit(&target_type);
> > int dest = acc_queue2_win_MPI_BOR_target[ii];
> > MPI_Win_lock(MPI_LOCK_EXCLUSIVE, dest, 0, queue2_win);
> >
> > MPI_Accumulate(&acc_queue2_win_MPI_BOR_data[ii], count, MPI_UNSIGNED_LONG, dest, 0, 1, target_type, MPI_BOR, queue2_win);
> >
> > MPI_Win_unlock(dest, queue2_win);
> > MPI_Type_free(&target_type);
> >
> > MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
> >
> >
> > Let me know if it works.
> >
> > Thanks
> > Ziaul
> >
> > On Tue, May 29, 2012 at 4:13 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> > Can you send the complete program if it is small?
> >
> > Rajeev
> >
> > On May 29, 2012, at 2:54 PM, Ziaul Haque Olive wrote:
> >
> > > For a smaller number of processes, like 4, I was getting the correct
> > > result, but for 8 it was producing an incorrect result.
> > >
> > > I tried with and without lock/unlock; without lock/unlock it produces
> > > the correct result every time.
> > >
> > > Hello,
> > >
> > > I am getting an incorrect result while using lock/unlock
> > > synchronization inside a fence. The pattern is as follows:
> > >
> > > MPI_Win_fence(win1);
> > > ..........
> > > MPI_Win_lock(exclusive, win1);
> > >
> > > MPI_Accumulate(MPI_BOR, win1);
> > >
> > > MPI_Win_unlock(win1);
> > >
> > > MPI_Win_fence(win1);
> > >
> > > Is it invalid to use lock in this way?
> > >
> > > Thanks,
> > > Ziaul.
> > >
> _______________________________________________
> mpich-discuss mailing list mpich-discuss at mcs.anl.gov
> To manage subscription options or unsubscribe:
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss