[petsc-users] locate DMSwarm particles with respect to a background DMDA mesh

Matteo Semplice matteo.semplice at uninsubria.it
Fri Dec 2 05:48:41 CST 2022


Hi.
I am sorry to take this up again, but further tests show that it's not 
right yet.

On 04/11/22 12:48, Matthew Knepley wrote:
> On Fri, Nov 4, 2022 at 7:46 AM Matteo Semplice 
> <matteo.semplice at uninsubria.it> wrote:
>
>     On 04/11/2022 02:43, Matthew Knepley wrote:
>>     On Thu, Nov 3, 2022 at 8:36 PM Matthew Knepley
>>     <knepley at gmail.com> wrote:
>>
>>         On Thu, Oct 27, 2022 at 11:57 AM Semplice Matteo
>>         <matteo.semplice at uninsubria.it> wrote:
>>
>>             Dear Petsc developers,
>>             I am trying to use a DMSwarm to locate a cloud of points
>>             with respect to a background mesh. In the real
>>             application the points will be loaded from disk, but I
>>             have created a small demo in which
>>
>>               * each processor creates Npart particles, all within
>>                 the domain covered by the mesh, but not all in the
>>                 local portion of the mesh
>>               * the particles are migrated
>>
>>             After migration, most particles are no longer in the
>>             DMSwarm (how many and which ones seem to depend on the
>>             number of CPUs, but in no case do all particles survive
>>             the migration process).
>>
>>             I am clearly missing some step, since I'd expect that a
>>             DMDA would be able to locate particles without the need
>>             to go through a DMShell as it is done in
>>             src/dm/tutorials/swarm_ex3.c.html
>>
>>             I attach my demo code.
>>
>>             Could someone give me a hint?
>>
>>
>>         Thanks for sending this. I found the problem. Someone has
>>         some overly fancy code inside DMDA to figure out the local
>>         bounding box from the coordinates.
>>         It is broken for DM_BOUNDARY_GHOSTED, but we never tested
>>         with this. I will fix it.
>>
>>
>>     Okay, I think this fix is correct
>>
>>     https://gitlab.com/petsc/petsc/-/merge_requests/5802
>>
>>     I incorporated your test as src/dm/impls/da/tests/ex1.c. Can you
>>     take a look and see if this fixes your issue?
>
>     Yes, we have tested 2D and 3D with various combinations of
>     DM_BOUNDARY_* along the different directions, and it works like a charm.
>
>     On a side note, neither DMSwarmViewXDMF nor DMSwarmMigrate seems to
>     be implemented for 1D: I get
>
>     [0]PETSC ERROR: No support for this operation for this object type
>     [0]PETSC ERROR: Support not provided for 1D
>
>     However, currently I have no need for this feature.
>
>     Finally, if the test is meant to stay in the source, you may
>     remove the call to DMSwarmRegisterPetscDatatypeField as in the
>     attached patch.
>
>     Thanks a lot!!
>
> Thanks! Glad it works.
>
>    Matt
>
There are still problems when not using 1, 2 or 4 CPUs. Every other
number of CPUs that I have tested does not work correctly.

I have modified src/dm/impls/da/tests/ex1.c to show the problem (see the
attached patch). In particular, I have introduced a -Nx option to allow
grid sizes larger than the default 4x4, and I have suppressed the
particle coordinate output by default (the -part_view option restores
the old behaviour).
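Roughly, the added lines look like this (a sketch only, not the actual
patch; the variable names and default values are my own choices here):

    /* excerpt (sketch): read the new options; PetscCall() needs a recent PETSc,
       with older versions use ierr = ...; CHKERRQ(ierr) instead */
    PetscInt  Nx        = 4;            /* background grid is Nx x Nx; default 4x4 */
    PetscBool part_view = PETSC_FALSE;  /* print particle coordinates only on request */

    PetscCall(PetscOptionsGetInt(NULL, NULL, "-Nx", &Nx, NULL));
    PetscCall(PetscOptionsHasName(NULL, NULL, "-part_view", &part_view));
    /* Nx then replaces the hard-coded 4 in the DMDACreate2d() call, and the
       particle-coordinate printing is guarded by the part_view flag */

Here's the output on my machine: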

$ mpirun -np 1 ./da_test_ex1 -Nx 40
Total of 8 particles
... calling DMSwarmMigrate ...
Total of 8 particles

$ mpirun -np 2 ./da_test_ex1 -Nx 40
Total of 16 particles
... calling DMSwarmMigrate ...
Total of 16 particles

$ mpirun -np 4 ./da_test_ex1 -Nx 40
Total of 32 particles
... calling DMSwarmMigrate ...
Total of 32 particles

$ mpirun -np 3 ./da_test_ex1 -Nx 40
Total of 24 particles
... calling DMSwarmMigrate ...
Total of 22 particles

$ mpirun -oversubscribe -np 5 ./da_test_ex1 -Nx 40
Total of 40 particles
... calling DMSwarmMigrate ...
Total of 22 particles

$mpirun -oversubscribe -np 8 ./da_test_ex1 -Nx 40
Total of 64 particles
... calling DMSwarmMigrate ...
Total of 46 particles

As you can see, only the runs on 1, 2 and 4 CPUs do not lose particles.

(The test could easily be modified to return the difference between
nTotPart before and after the migration, so that it exits with a
nonzero code whenever particles are lost.)
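For instance (just a sketch; the variable names swarm and da and the
surrounding setup are assumptions of mine, not the actual test code):

    PetscInt nBefore, nAfter;

    PetscCall(DMSwarmGetSize(swarm, &nBefore));    /* global particle count before migration */
    PetscCall(DMSwarmMigrate(swarm, PETSC_TRUE));  /* PETSC_TRUE: drop the sent copies locally */
    PetscCall(DMSwarmGetSize(swarm, &nAfter));     /* global particle count after migration */

    PetscCall(DMDestroy(&swarm));
    PetscCall(DMDestroy(&da));
    PetscCall(PetscFinalize());
    return (int)(nBefore - nAfter);  /* nonzero exit code iff particles were lost */

That way the failing CPU counts would show up even without reading the
printed totals.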

Best

     Matteo

-------------- next part --------------
A non-text attachment was scrubbed...
Name: da-tests-ex1.patch
Type: text/x-patch
Size: 2079 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20221202/40bb1cfd/attachment-0001.bin>

