[petsc-dev] Swarm tag error
Stefano Zampini
stefano.zampini at gmail.com
Wed Nov 23 14:18:49 CST 2022
If ranks are uniquely listed in neighbour_procs, then you only need one fresh tag per communication round, obtained from PetscCommGetNewTag.
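For concreteness, here is a minimal sketch of that single-tag scheme. It is not the actual data_ex.c code: the function and variable names only loosely mirror it, and it assumes the communicator was duplicated with PetscCommDuplicate so PetscCommGetNewTag can hand out tags on it.

  #include <petscsys.h>

  /* Sketch: exchange one count with each unique neighbour using a single
   * fresh tag for the whole round; messages are disambiguated by the
   * source rank, so one tag is enough. */
  static PetscErrorCode ExchangeCounts(MPI_Comm comm, PetscMPIInt np, const PetscMPIInt neighbour_procs[], const PetscInt messages_to_be_sent[], PetscInt messages_to_be_received[], MPI_Request requests[])
  {
    PetscMPIInt tag, i;

    PetscFunctionBegin;
    PetscCall(PetscCommGetNewTag(comm, &tag)); /* one new tag per communication round */
    for (i = 0; i < np; ++i) {
      PetscCallMPI(MPI_Isend(&messages_to_be_sent[i], 1, MPIU_INT, neighbour_procs[i], tag, comm, &requests[i]));
      PetscCallMPI(MPI_Irecv(&messages_to_be_received[i], 1, MPIU_INT, neighbour_procs[i], tag, comm, &requests[np + i]));
    }
    PetscCallMPI(MPI_Waitall(2 * np, requests, MPI_STATUSES_IGNORE));
    PetscFunctionReturn(0);
  }

MPI matches messages on (source, tag, communicator), so as long as each neighbour appears only once in neighbour_procs the per-pair tags are redundant.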
> On Nov 23, 2022, at 9:08 PM, Dave May <dave.mayhem23 at gmail.com> wrote:
>
>
>
> On Wed, 23 Nov 2022 at 08:57, Junchao Zhang <junchao.zhang at gmail.com> wrote:
> From my reading, the code actually does not need multiple tags. You can just let _get_tags() return a constant (say 0), or use your modulo MPI_TAG_UB approach.
>
> Yes I believe that is correct.
>
>
>
> 541 for (i = 0; i < np; ++i) PetscCallMPI(MPI_Isend(&de->messages_to_be_sent[i], 1, MPIU_INT, de->neighbour_procs[i], de->send_tags[i], de->comm, &de->_requests[i]));
> 542 for (i = 0; i < np; ++i) PetscCallMPI(MPI_Irecv(&de->messages_to_be_recvieved[i], 1, MPIU_INT, de->neighbour_procs[i], de->recv_tags[i], de->comm, &de->_requests[np + i]));
>
> --Junchao Zhang
>
>
> On Tue, Nov 22, 2022 at 11:59 PM Matthew Knepley <knepley at gmail.com> wrote:
> On Tue, Nov 22, 2022 at 11:23 PM Junchao Zhang <junchao.zhang at gmail.com> wrote:
> I don't understand why you need so many tags. Is the communication pattern actually MPI_Alltoallv, but you implemented it in MPI_Send/Recv?
>
> I am preserving the original design from Dave until we do a more thorough rewrite. I think he is using a different tag for each pair of processes to
> make debugging easier.
>
> I don't think Alltoallv is appropriate most of the time. If you had a lot of particles with a huge spread of velocities then you could get that pattern, but most
> scenarios, I think, look close to nearest neighbor.
>
> Thanks,
>
> Matt
>
> --Junchao Zhang
>
>
> On Mon, Nov 21, 2022 at 2:37 PM Matthew Knepley <knepley at gmail.com> wrote:
> In data_ex.c, Swarm uses a distinct tag for each pair of processes. If the number of processes exceeds 1024, there are more than 1024^2 tags, which exceeds MPI_TAG_UB on Intel MPI.
>
> My solution is going to be to use that process pair number modulo MPI_TAG_UB. Does anyone have a slicker suggestion?
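For reference, the modulo scheme could be as simple as the sketch below. ComputePairTag and pair_id are made-up names rather than the actual data_ex.c code; pair_id stands for whatever unique number the existing code assigns to a (rank, neighbour) pair. Collisions between different pairs are harmless because MPI also matches on the source rank.

  #include <petscsys.h>

  /* Sketch: wrap a (possibly huge) per-pair number into the legal tag
   * range.  MPI_TAG_UB is a predefined attribute of MPI_COMM_WORLD and
   * valid tags are 0..tag_ub inclusive. */
  static PetscErrorCode ComputePairTag(PetscInt pair_id, PetscMPIInt *tag)
  {
    void       *attr_val;
    PetscMPIInt flag, tag_ub;

    PetscFunctionBegin;
    PetscCallMPI(MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &attr_val, &flag));
    PetscCheck(flag, PETSC_COMM_SELF, PETSC_ERR_LIB, "MPI_TAG_UB attribute not set");
    tag_ub = *(PetscMPIInt *)attr_val;
    *tag   = (PetscMPIInt)(pair_id % tag_ub); /* stay within [0, tag_ub) */
    PetscFunctionReturn(0);
  }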
>
> Thanks,
>
> Matt
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>