[petsc-dev] Swarm tag error
Dave May
dave.mayhem23 at gmail.com
Wed Nov 23 12:00:35 CST 2022
On Tue, 22 Nov 2022 at 21:59, Matthew Knepley <knepley at gmail.com> wrote:
> On Tue, Nov 22, 2022 at 11:23 PM Junchao Zhang <junchao.zhang at gmail.com>
> wrote:
>
>> I don't understand why you need so many tags. Is the communication
>> pattern actually MPI_Alltoallv, but implemented with MPI_Send/Recv?
>>
>>
>
> I am preserving the original design from Dave until we do a more thorough
> rewrite. I think he is using a different tag for each pair of processes to
> make debugging easier.
>
This is correct.
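
For what it's worth, here is a minimal sketch of the scheme, including the
modulo clamp Matt proposes below. The helper name and layout are mine
(hypothetical, not the actual data_ex.c code); the two points it illustrates
are that each ordered pair of ranks gets its own tag, and that MPI_TAG_UB is
a communicator attribute which has to be queried at run time:

#include <mpi.h>

/* Hypothetical helper, not the actual data_ex.c code: one tag per
   ordered (src, dst) pair, clamped by the communicator's tag upper
   bound so it stays legal on MPIs with a small MPI_TAG_UB. */
static int PairTag(MPI_Comm comm, int src, int dst)
{
  int size, flag, *ub;

  MPI_Comm_size(comm, &size);
  /* MPI_TAG_UB is a communicator attribute, not a compile-time
     constant, so query it at run time. */
  MPI_Comm_get_attr(comm, MPI_TAG_UB, &ub, &flag);
  /* Distinct tag per ordered pair: src*size + dst. With more than
     1024 ranks this can exceed the upper bound (on Intel MPI it is
     evidently less than 1024^2), so reduce it modulo (ub + 1). Do
     the arithmetic in 64 bits so the product cannot overflow an int. */
  long long tag = (long long)src * (long long)size + (long long)dst;
  return (int)(tag % ((long long)(*ub) + 1));
}

After the modulo, two different pairs can share a tag, but since a receive
also matches on the source rank this should still be unambiguous; it only
dilutes the debugging value of globally unique tags.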
>
> I don't think Alltoallv is appropriate most of the time. If you had a lot
> of particles with a huge spread of velocities then you could get that, but
> most scenarios, I think, look close to nearest neighbor.
>
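To make that concrete: even when a rank talks to a single neighbor,
MPI_Alltoallv still takes four length-P count/displacement arrays on every
rank, so its bookkeeping scales with the communicator size rather than with
the number of neighbors. A self-contained illustration (my own sketch, not
PETSc code):

#include <mpi.h>
#include <stdlib.h>

/* Each rank sends one double to the next rank: the sparsest possible
   exchange, yet MPI_Alltoallv still needs four arrays of length P. */
int main(int argc, char **argv)
{
  int P, rank;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &P);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  int *scnt = calloc(P, sizeof(int)), *sdsp = calloc(P, sizeof(int));
  int *rcnt = calloc(P, sizeof(int)), *rdsp = calloc(P, sizeof(int));
  double sbuf = (double)rank, rbuf = -1.0;

  /* Only one entry per array is nonzero; the other P-1 entries must
     still be allocated and scanned by the implementation. */
  if (rank + 1 < P) scnt[rank + 1] = 1;
  if (rank > 0)     rcnt[rank - 1] = 1;

  MPI_Alltoallv(&sbuf, scnt, sdsp, MPI_DOUBLE,
                &rbuf, rcnt, rdsp, MPI_DOUBLE, MPI_COMM_WORLD);

  free(scnt); free(sdsp); free(rcnt); free(rdsp);
  MPI_Finalize();
  return 0;
}

With a handful of neighbors out of thousands of ranks, per-neighbor
point-to-point messages touch only the ranks that actually communicate,
which is why they fit the near-neighbor pattern better.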
> Thanks,
>
> Matt
>
>
>> --Junchao Zhang
>>
>>
>> On Mon, Nov 21, 2022 at 2:37 PM Matthew Knepley <knepley at gmail.com>
>> wrote:
>>
>>> In data_ex.c, Swarm uses a distinct tag for each pair of processes. If
>>> the number of processes exceeds 1024, there are more than 1024^2 tags,
>>> which exceeds MPI_TAG_UB on Intel MPI.
>>>
>>> My solution is going to be to use the process-pair number modulo
>>> MPI_TAG_UB. Does anyone have a slicker suggestion?
>>>
>>> Thanks,
>>>
>>> Matt
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>>
>>