[petsc-users] locate DMSwarm particles with respect to a background DMDA mesh

Matthew Knepley knepley at gmail.com
Thu Dec 22 07:02:08 CST 2022


On Thu, Dec 22, 2022 at 6:28 AM Matteo Semplice <
matteo.semplice at uninsubria.it> wrote:

> Dear all
>
>     please ignore my previous email and read this one: I have localized
> the problem more precisely. Maybe DMSwarmMigrate is designed to migrate
> particles only to first-neighbour ranks?
>
Yes, I believe that was the design.

Dave, is this correct?

  Thanks,

    Matt


> On 22/12/22 11:44, Matteo Semplice wrote:
>
> Dear everybody,
>
>     I have dug a bit into the code and I am able to add more information.
> On 02/12/22 12:48, Matteo Semplice wrote:
>
> Hi.
> I am sorry to take this up again, but further tests show that it's not
> right yet.
>
> On 04/11/22 12:48, Matthew Knepley wrote:
>
> On Fri, Nov 4, 2022 at 7:46 AM Matteo Semplice <
> matteo.semplice at uninsubria.it> wrote:
>
>> On 04/11/2022 02:43, Matthew Knepley wrote:
>>
>> On Thu, Nov 3, 2022 at 8:36 PM Matthew Knepley <knepley at gmail.com> wrote:
>>
>>> On Thu, Oct 27, 2022 at 11:57 AM Semplice Matteo <
>>> matteo.semplice at uninsubria.it> wrote:
>>>
>>>> Dear Petsc developers,
>>>>     I am trying to use a DMSwarm to locate a cloud of points with
>>>> respect to a background mesh. In the real application the points will be
>>>> loaded from disk, but I have created a small demo in which
>>>>
>>>>    - each processor creates Npart particles, all within the domain
>>>>    covered by the mesh, but not all in the local portion of the mesh
>>>>    - the particles are migrated
>>>>
>>>> After migration, most particles are no longer in the DMSwarm (how many
>>>> and which ones seem to depend on the number of cpus, but it never
>>>> happens that all particles survive the migration process).
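>>>>
>>>> For reference, the core of the demo looks roughly like the sketch below.
>>>> (This is a minimal reconstruction, not the exact attached code; the grid
>>>> size, Npart, the buffer size and the particle placement are illustrative.)
>>>>
>>>> #include <petscdmda.h>
>>>> #include <petscdmswarm.h>
>>>>
>>>> int main(int argc, char **argv)
>>>> {
>>>>   DM         da, sw;
>>>>   PetscReal *coor;
>>>>   PetscInt   p, Npart = 10;
>>>>
>>>>   PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
>>>>   /* background mesh: 20x20 Q1 elements on [-1,1]x[-1,1] */
>>>>   PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE,
>>>>                          DM_BOUNDARY_NONE, DMDA_STENCIL_BOX, 21, 21,
>>>>                          PETSC_DECIDE, PETSC_DECIDE, 1, 1, NULL, NULL, &da));
>>>>   PetscCall(DMSetUp(da));
>>>>   PetscCall(DMDASetUniformCoordinates(da, -1., 1., -1., 1., 0., 0.));
>>>>
>>>>   /* particle swarm attached to the DMDA */
>>>>   PetscCall(DMCreate(PETSC_COMM_WORLD, &sw));
>>>>   PetscCall(DMSetType(sw, DMSWARM));
>>>>   PetscCall(DMSetDimension(sw, 2));
>>>>   PetscCall(DMSwarmSetType(sw, DMSWARM_PIC));
>>>>   PetscCall(DMSwarmSetCellDM(sw, da));
>>>>   PetscCall(DMSwarmFinalizeFieldRegister(sw));
>>>>   PetscCall(DMSwarmSetLocalSizes(sw, Npart, 4));
>>>>
>>>>   /* every rank spreads its particles over the WHOLE domain, so most
>>>>      of them fall outside the local portion of the mesh */
>>>>   PetscCall(DMSwarmGetField(sw, DMSwarmPICField_coor, NULL, NULL,
>>>>                             (void **)&coor));
>>>>   for (p = 0; p < Npart; ++p) {
>>>>     coor[2 * p + 0] = -0.9 + 1.8 * p / (Npart - 1);
>>>>     coor[2 * p + 1] = -0.9 + 1.8 * p / (Npart - 1);
>>>>   }
>>>>   PetscCall(DMSwarmRestoreField(sw, DMSwarmPICField_coor, NULL, NULL,
>>>>                                 (void **)&coor));
>>>>
>>>>   /* each particle should end up on the rank that owns its cell */
>>>>   PetscCall(DMSwarmMigrate(sw, PETSC_TRUE));
>>>>
>>>>   PetscCall(DMDestroy(&sw));
>>>>   PetscCall(DMDestroy(&da));
>>>>   PetscCall(PetscFinalize());
>>>>   return 0;
>>>> }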
>>>>
>>> Thanks for sending this. I found the problem. Someone has some overly
>>> fancy code inside DMDA to figure out the local bounding box from the
>>> coordinates.
>>> It is broken for DM_BOUNDARY_GHOSTED, but we never tested with this. I
>>> will fix it.
>>>
>>
>> Okay, I think this fix is correct
>>
>>   https://gitlab.com/petsc/petsc/-/merge_requests/5802
>>
>> I incorporated your test as src/dm/impls/da/tests/ex1.c. Can you take a
>> look and see if this fixes your issue?
>>
>> Yes, we have tested 2d and 3d, with various combinations of DM_BOUNDARY_*
>> along different directions and it works like a charm.
>>
>> On a side note, neither DMSwarmViewXDMF nor DMSwarmMigrate seems to be
>> implemented for 1d: I get
>>
>> [0]PETSC ERROR: No support for this operation for this object type
>> [0]PETSC ERROR: Support not provided for 1D
>>
>> However, currently I have no need for this feature.
>>
>> Finally, if the test is meant to stay in the source, you may remove the
>> call to DMSwarmRegisterPetscDatatypeField as in the attached patch.
>>
>> Thanks a lot!!
>>
> Thanks! Glad it works.
>
>    Matt
>
> There are still problems when not using 1, 2 or 4 cpus. Any other number
> of cpus that I've tested does not work correctly.
>
> I have now modified private_DMDALocatePointsIS_2D_Regular to print out
> some debugging information. I see that this is called twice during
> migration, once before and once after DMSwarmMigrate_DMNeighborScatter. If
> I understand correctly, the second call to
> private_DMDALocatePointsIS_2D_Regular should be able to locate all
> particles owned by the rank, but it fails for some of them because they have
> been sent to the wrong rank (despite being well away from process
> boundaries).
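>
> A quick way to quantify the losses (a hypothetical fragment, not part of
> the test itself; "sw" stands for the swarm being migrated) is to compare
> the global particle count before and after the migration:
>
> PetscInt nBefore, nAfter;
> PetscCall(DMSwarmGetSize(sw, &nBefore)); /* global count before */
> PetscCall(DMSwarmMigrate(sw, PETSC_TRUE));
> PetscCall(DMSwarmGetSize(sw, &nAfter));  /* global count after  */
> if (nAfter != nBefore) {
>   PetscCall(PetscPrintf(PETSC_COMM_WORLD, "lost %" PetscInt_FMT " particles\n",
>                         nBefore - nAfter));
> }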
>
> For example, running the example src/dm/impls/da/tests/ex1.c with Nx=21
> (20x20 Q1 elements on [-1,1]x[-1,1]) on 3 processors:
>
> - the particles at (-0.191,-0.462) and (0.191,-0.462) are sent to cpu2
> instead of cpu0
>
> - those at (-0.287,-0.693) and (0.287,-0.693) are sent to cpu1 instead of
> cpu0
>
> - those at (0.191,0.462) and (-0.191,0.462) are sent to cpu0 instead of
> cpu2
>
> (This is 2d and thus not affected by the 3d issue mentioned yesterday on
> petsc-dev. Tests were run on the release branch pulled this morning, i.e.
> on commit bebdc8d016f.)
>
> I see: particles are sent "all around" and not only to the destination
> rank.
>
> Still, however, running the example src/dm/impls/da/tests/ex1.c with Nx=21
> (20x20 Q1 elements on [-1,1]x[-1,1]) with 3 processors, there are 2
> particles initially owned by rank2 (at y=-0.6929 and x=+/-0.2870) that are
> sent only to rank1 and never make it to rank0; they are thus lost in the
> end, since rank1, correctly, discards them.
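>
> If the first-neighbour design is indeed the explanation, rank0 and rank2
> are not neighbours in this decomposition, so those particles would need
> two hops. A toy model of the loss mechanism, with made-up helpers (none of
> this is PETSc code, just my understanding of the logic):
>
> #include <stdbool.h>
> #include <stdio.h>
>
> #define NRANKS 3
>
> /* rank r owns the slab [r, r+1) of a 1d domain [0, NRANKS) */
> static bool owns(int rank, double y) { return y >= rank && y < rank + 1; }
>
> int main(void)
> {
>   double y  = 0.5; /* particle owned by rank 0...          */
>   int start = 2;   /* ...but initially stored on rank 2    */
>
>   /* the scatter reaches only the first neighbours of the starting rank */
>   for (int n = start - 1; n <= start + 1; n += 2) {
>     if (n < 0 || n >= NRANKS) continue;
>     /* a receiving rank keeps the particle only if it owns it */
>     if (owns(n, y)) printf("rank %d keeps the particle\n", n);
>     else            printf("rank %d discards the particle: lost\n", n);
>   }
>   return 0;
> }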
>
> Thanks
>
>     Matteo
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/