Hello Jim,

I was coalescing data communication: data is collected in a buffer instead of being sent immediately. Say the size of this temporary buffer is N. Whenever the buffer fills up, I need to send all the data from it so that it can be reused in later iterations, but without any synchronization it is not possible to be sure that the buffer is free to reuse; that is why I used lock/unlock. Moreover, at the start of each iteration all data transfers from remote processes to the local window must have finished, so the fence is used.
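Concretely, the flush step looks roughly like this (a sketch only; the buffer names, the size N, and flush_buffer are made up for illustration, not the actual benchmark code):

    #include <mpi.h>

    #define N 1024                         /* capacity of the coalescing buffer */

    static unsigned long coalesce_buf[N];  /* temporary buffer for queued updates */
    static int buf_count = 0;              /* number of entries currently queued */

    /* Send the queued updates to the target's window, then mark the buffer
     * reusable.  Unlock completes the accumulate, so after it returns the
     * local buffer can safely be overwritten. */
    static void flush_buffer(int target, MPI_Win win)
    {
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win);
        MPI_Accumulate(coalesce_buf, buf_count, MPI_UNSIGNED_LONG,
                       target, 0, buf_count, MPI_UNSIGNED_LONG,
                       MPI_BOR, win);
        MPI_Win_unlock(target, win);  /* buffer is now free to reuse */
        buf_count = 0;
    }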
The MPI_MODE_NOSUCCEED in the first fence was a typo; it is not present in the original code.

Thanks,
Ziaul

On Tue, May 29, 2012 at 5:11 PM, Jim Dinan <dinan@mcs.anl.gov> wrote:
Hi Ziaul,

MPI_MODE_NOSUCCEED is incorrect in the first call to fence, since RMA operations succeed (follow) this synchronization call. Hopefully this is just a typo and you meant MPI_MODE_NOPRECEDE.

In terms of the datatypes, the layout at the origin and the target need not be the same. However, the basic unit types and the total number of elements must match for accumulate.
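For example, something like the following sketch would be legal (illustrative only; origin_buf, dest, and win stand in for your variables): a contiguous origin buffer of four unsigned longs accumulated into a non-contiguous target, since both sides describe four MPI_UNSIGNED_LONG elements in total.

    int blens[2] = { 2, 2 };   /* two blocks of two elements each */
    int disps[2] = { 0, 8 };   /* element displacements in the target window */
    MPI_Datatype target_type;

    MPI_Type_indexed(2, blens, disps, MPI_UNSIGNED_LONG, &target_type);
    MPI_Type_commit(&target_type);

    /* origin: 4 contiguous unsigned longs; target: 4 unsigned longs in
     * two blocks -- different layouts, same basic type and total count */
    MPI_Accumulate(origin_buf, 4, MPI_UNSIGNED_LONG,
                   dest, 0, 1, target_type, MPI_BOR, win);

    MPI_Type_free(&target_type);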
In the example given below, you shouldn't need lock/unlock in addition to the fences. Can you refine this a little more to capture why this is needed in your application?

Best,
~Jim.

On 5/29/12 4:43 PM, Ziaul Haque Olive wrote:
Hello Rajeev,

Yes, there is a reason for it in my program. The code I sent you was a simplified version; the original one is a little bit different. But this code sometimes works correctly and sometimes does not.
I have another question, about indexed datatypes and MPI_Accumulate.

If the indexed datatype for the target process contains indexes like:

2, 5, 3, 1        -> out of order
or
2, 4, 5, 4, 2, 2  -> out of order and repetition
or
2, 3, 3, 3, 5     -> in order and repetition

would these be a problem?
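To make the question concrete, the second case would be built roughly like this (origin_buf, dest, and win are placeholders):

    int blens[6] = { 1, 1, 1, 1, 1, 1 };
    int disps[6] = { 2, 4, 5, 4, 2, 2 };  /* out of order, with repetition */
    MPI_Datatype target_type;

    MPI_Type_indexed(6, blens, disps, MPI_UNSIGNED_LONG, &target_type);
    MPI_Type_commit(&target_type);

    /* whether accumulating through a target type with repeated entries
     * is well-defined is exactly what I am asking */
    MPI_Accumulate(origin_buf, 6, MPI_UNSIGNED_LONG,
                   dest, 0, 1, target_type, MPI_BOR, win);

    MPI_Type_free(&target_type);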
Thanks,
Ziaul

On Tue, May 29, 2012 at 4:32 PM, Rajeev Thakur <thakur@mcs.anl.gov> wrote:

Nesting of synchronization epochs is not allowed. Is there a reason to do it this way?
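The two non-nested alternatives would look roughly like this (a sketch with placeholder names buf, count, dest, target_type, win):

    /* Alternative 1: active target only -- the fences delimit the access
     * epoch, with no lock/unlock inside it. */
    MPI_Win_fence(MPI_MODE_NOPRECEDE, win);
    MPI_Accumulate(buf, count, MPI_UNSIGNED_LONG,
                   dest, 0, 1, target_type, MPI_BOR, win);
    MPI_Win_fence(MPI_MODE_NOSUCCEED, win);

    /* Alternative 2: passive target only -- lock/unlock completes the
     * operation, with no enclosing fences on the same window. */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, dest, 0, win);
    MPI_Accumulate(buf, count, MPI_UNSIGNED_LONG,
                   dest, 0, 1, target_type, MPI_BOR, win);
    MPI_Win_unlock(dest, win);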
Rajeev

On May 29, 2012, at 4:28 PM, Ziaul Haque Olive wrote:
> Hello Rajeev,
>
> The whole code is a bit large; it is from the graph500 benchmark,
> bfs_one_sided.c, and I am trying to transform it a bit. Here is a
> portion of the code:
>
> MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
>
> int ii=0, jj, count=1;
> acc_queue2_win_MPI_BOR_data[ii] = masks[VERTEX_LOCAL(w)/elts_per_queue_bit%ulong_bits];
> acc_queue2_win_MPI_BOR_disp[ii] = VERTEX_LOCAL(w)/elts_per_queue_bit/ulong_bits;
> acc_queue2_win_MPI_BOR_target[ii] = VERTEX_OWNER(w);
>
> MPI_Datatype target_type;
> MPI_Type_indexed(count, blength, &acc_queue2_win_MPI_BOR_disp[ii], MPI_UNSIGNED_LONG, &target_type);
> MPI_Type_commit(&target_type);
> int dest = acc_queue2_win_MPI_BOR_target[ii];
> MPI_Win_lock(MPI_LOCK_EXCLUSIVE, dest, 0, queue2_win);
>
> MPI_Accumulate(&acc_queue2_win_MPI_BOR_data[ii], count, MPI_UNSIGNED_LONG, dest, 0, 1, target_type, MPI_BOR, queue2_win);
>
> MPI_Win_unlock(dest, queue2_win);
> MPI_Type_free(&target_type);
>
> MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
>
> Let me know if it works.
>
> Thanks,
> Ziaul
>
> On Tue, May 29, 2012 at 4:13 PM, Rajeev Thakur <thakur@mcs.anl.gov> wrote:
> Can you send the complete program if it is small?
>
> Rajeev
>
> On May 29, 2012, at 2:54 PM, Ziaul Haque Olive wrote:
>
> > For a smaller number of processes, like 4, I was getting the correct
> > result, but for 8 it gave an incorrect result.
> >
> > I tried with and without lock/unlock. Without lock/unlock it gives
> > the correct result all the time.
> >
> > Hello,
> >
> > I am getting an incorrect result while using lock/unlock
> > synchronization inside a fence epoch. The pattern is as follows:
> >
> > MPI_Win_fence(win1);
> > ..........
> > MPI_Win_lock(exclusive, win1);
> >
> > MPI_Accumulate(MPI_BOR, win1);
> >
> > MPI_Win_unlock(win1);
> >
> > MPI_Win_fence(win1);
> >
> > Is it invalid to use lock in this way?
> >
> > Thanks,
> > Ziaul.