Hello Rajeev,

The whole code is a bit large; it is from the Graph500 benchmark (bfs_one_sided.c), which I am trying to transform a bit. Here is a portion of the code:

	MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);

	int ii = 0, jj, count = 1;
	acc_queue2_win_MPI_BOR_data[ii]   = masks[VERTEX_LOCAL(w) / elts_per_queue_bit % ulong_bits];
	acc_queue2_win_MPI_BOR_disp[ii]   = VERTEX_LOCAL(w) / elts_per_queue_bit / ulong_bits;
	acc_queue2_win_MPI_BOR_target[ii] = VERTEX_OWNER(w);

	MPI_Datatype target_type;
	MPI_Type_indexed(count, blength, &acc_queue2_win_MPI_BOR_disp[ii], MPI_UNSIGNED_LONG, &target_type);
	MPI_Type_commit(&target_type);

	int dest = acc_queue2_win_MPI_BOR_target[ii];
	MPI_Win_lock(MPI_LOCK_EXCLUSIVE, dest, 0, queue2_win);
	MPI_Accumulate(&acc_queue2_win_MPI_BOR_data[ii], count, MPI_UNSIGNED_LONG, dest, 0, 1, target_type, MPI_BOR, queue2_win);
	MPI_Win_unlock(dest, queue2_win);
	MPI_Type_free(&target_type);

	MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
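In case it helps, below is a minimal self-contained sketch of just the pattern in question, stripped of the Graph500 data structures: each rank ORs one bit into a single unsigned long owned by its neighbour, using lock / MPI_Accumulate(MPI_BOR) / unlock between two fences. The window layout and values here are made up purely for illustration, and I pass assert 0 to the fences here instead of MPI_MODE_NOSUCCEED to keep it simple.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    unsigned long *queue2;   /* one unsigned long of bitmap per rank (illustrative) */
    MPI_Win queue2_win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_Alloc_mem(sizeof(unsigned long), MPI_INFO_NULL, &queue2);
    *queue2 = 0;
    MPI_Win_create(queue2, sizeof(unsigned long), sizeof(unsigned long),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &queue2_win);

    MPI_Win_fence(0, queue2_win);

    /* Each rank ORs one bit into the word owned by the next rank,
     * using a lock/unlock (passive-target) epoch between the two fences --
     * exactly the synchronization sequence being asked about. */
    unsigned long mask = 1UL << (rank % (8 * (int)sizeof(unsigned long)));
    int dest = (rank + 1) % nprocs;

    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, dest, 0, queue2_win);
    MPI_Accumulate(&mask, 1, MPI_UNSIGNED_LONG, dest, 0, 1, MPI_UNSIGNED_LONG,
                   MPI_BOR, queue2_win);
    MPI_Win_unlock(dest, queue2_win);

    MPI_Win_fence(0, queue2_win);

    printf("rank %d: local queue2 word = 0x%lx\n", rank, *queue2);

    MPI_Win_free(&queue2_win);
    MPI_Free_mem(queue2);
    MPI_Finalize();
    return 0;
}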
Let me know if it works.

Thanks
Ziaul

On Tue, May 29, 2012 at 4:13 PM, Rajeev Thakur <thakur@mcs.anl.gov> wrote:
Can you send the complete program if it is small?

Rajeev

On May 29, 2012, at 2:54 PM, Ziaul Haque Olive wrote:

> For a smaller number of processes, like 4, I was getting the correct result, but for 8 it was producing an incorrect result.
>
> I tried with and without lock/unlock; without lock/unlock it gives the correct result every time.
>
> Hello,
>
> I am getting an incorrect result when using lock/unlock synchronization inside a fence epoch. The pattern is as follows:
>
> MPI_Win_fence(win1);
> ..........
> MPI_Win_lock(exclusive, win1);
>
> MPI_Accumulate(MPI_BOR, win1);
>
> MPI_Win_unlock(win1);
>
> MPI_Win_fence(win1);
>
> Is it invalid to use lock in this way?
>
> Thanks,
> Ziaul.
>
_______________________________________________
mpich-discuss mailing list     mpich-discuss@mcs.anl.gov
To manage subscription options or unsubscribe:
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss