<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, 21 Nov 2022 at 12:37, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">In data_ex.c, Swarm uses a distinct tag for each pair of processes. If the number of processes exceeds 1024, there are > 1024^2 tags which exceeds MPI_TAG_UB on Intel MPI.<div><br></div><div>My solution is going to be to use that process pair number modulo MPI_TAG_UB. Does anyone have a slicker suggestion?</div></div></blockquote><div><br></div><div><br></div><div>I think it should be possible to use the adjacency graph associated with the neighbour ranks which is defined within <span style="caret-color: rgb(0, 0, 0);"><font color="#000000">_DMSwarmDataExCompleteCommunicationMap() in the Mat object.</font></span></div><div><br></div><div>If Intel MPI cannot support tags greater than 1024, the proposition above is going to be of limited value.</div><div>A job with 100 MPI ranks, with subdomains which each have 11 neighbour ranks will exceed 1024.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div> Thanks,</div><div><br></div><div> Matt<br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div></div>
> Thanks,
>
>    Matt