[petsc-dev] Swarm tag error

Matthew Knepley knepley at gmail.com
Tue Nov 22 23:59:23 CST 2022


On Tue, Nov 22, 2022 at 11:23 PM Junchao Zhang <junchao.zhang at gmail.com>
wrote:

> I don't understand why you need so many tags.  Is the
> communication pattern actually MPI_Alltoallv, but you implemented it in
> MPI_Send/Recv?
>

I am preserving the original design from Dave until we do a more thorough
rewrite. I think he is using a different tag for each pair of processes to
make debugging easier.

I don't think Alltoallv is appropriate most of the time. If you had a lot
of particles with a huge spread of velocities, you could end up with an
all-to-all pattern, but most scenarios, I think, look close to nearest
neighbor.
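
For concreteness, here is a minimal sketch of the modulo fold I proposed in
the quoted message below. The pair index (i * size + j) is just a stand-in
for illustration; the actual tag computation in data_ex.c may differ.

  #include <mpi.h>
  #include <stdio.h>

  /* Fold a per-pair tag into the legal tag range [0, MPI_TAG_UB].
     The pair index i*size + j is a hypothetical stand-in for the
     scheme used in data_ex.c. */
  static int PairTag(MPI_Comm comm, int i, int j)
  {
    void *ub_ptr;
    int   flag, tag_ub, size;

    MPI_Comm_size(comm, &size);
    /* MPI guarantees MPI_TAG_UB is at least 32767 */
    MPI_Comm_get_attr(comm, MPI_TAG_UB, &ub_ptr, &flag);
    tag_ub = flag ? *(int *)ub_ptr : 32767;
    /* Valid tags run 0..tag_ub, hence the modulo by tag_ub + 1 */
    return (i * size + j) % (tag_ub + 1);
  }

  int main(int argc, char **argv)
  {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) printf("tag(0,%d) = %d\n", size - 1, PairTag(MPI_COMM_WORLD, 0, size - 1));
    MPI_Finalize();
    return 0;
  }

Since point-to-point matching also uses the source rank, two distinct pairs
mapping to the same folded tag should generally be harmless unless wildcard
(MPI_ANY_SOURCE / MPI_ANY_TAG) receives are in play.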

  Thanks,

      Matt


> --Junchao Zhang
>
>
> On Mon, Nov 21, 2022 at 2:37 PM Matthew Knepley <knepley at gmail.com> wrote:
>
>> In data_ex.c, Swarm uses a distinct tag for each pair of processes. If
>> the number of processes exceeds 1024, there are more than 1024^2 tags,
>> which exceeds MPI_TAG_UB on Intel MPI.
>>
>> My solution is going to be to use the process pair number modulo
>> MPI_TAG_UB. Does anyone have a slicker suggestion?
>>
>>   Thanks,
>>
>>       Matt
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/