Dear everybody,

I have dug a bit into the code and can add more information.
<div class="moz-cite-prefix">Il 02/12/22 12:48, Matteo Semplice ha
scritto:<br>
</div>
<blockquote type="cite" cite="mid:35ebfa58-eed5-39fa-8b3d-918ff9d7e633@uninsubria.it">
<div class="moz-cite-prefix">Hi.</div>
<div class="moz-cite-prefix">I am sorry to take this up again, but
further tests show that it's not right yet.<br>
</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">Il 04/11/22 12:48, Matthew Knepley ha
scritto:<br>
</div>
<blockquote type="cite" cite="mid:CAMYG4Gnznu1ct7GhjYNcBVRqPmpuVNnz1O_4OqM-_s9fZ28=vA@mail.gmail.com">
<div dir="ltr">
<div dir="ltr">On Fri, Nov 4, 2022 at 7:46 AM Matteo Semplice
<<a href="mailto:matteo.semplice@uninsubria.it" moz-do-not-send="true" class="moz-txt-link-freetext">matteo.semplice@uninsubria.it</a>>
wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div> On 04/11/2022 02:43, Matthew Knepley wrote:<br>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">On Thu, Nov 3, 2022 at 8:36 PM
Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank" moz-do-not-send="true" class="moz-txt-link-freetext">knepley@gmail.com</a>>
wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px
0px 0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">On Thu, Oct 27, 2022 at 11:57
AM Semplice Matteo <<a href="mailto:matteo.semplice@uninsubria.it" target="_blank" moz-do-not-send="true" class="moz-txt-link-freetext">matteo.semplice@uninsubria.it</a>>
wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div>
<div dir="ltr">
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0);background-color:rgb(255,255,255)">Dear
Petsc developers,</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0);background-color:rgb(255,255,255)">
I am trying to use a DMSwarm to
locate a cloud of points with
respect to a background mesh. In the
real application the points will be
loaded from disk, but I have created
a small demo in which</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0);background-color:rgb(255,255,255)">
<ul>
<li><span>each processor creates
Npart particles, all within
the domain covered by the
mesh, but not all in the local
portion of the mesh</span></li>
<li><span>migrate the particles</span></li>
</ul>
<div>After migration most particles
are not any more in the DMSwarm
(how many and which ones seems to
depend on the number of cpus, but
it never happens that all particle
survive the migration process).</div>
<div><br>
</div>
</div>
</div>
</div>
</blockquote>
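>>>>>>
>>>>>> A minimal sketch of this kind of setup (not the actual attached demo;
>>>>>> the mesh size, Npart, and the particle placement below are made up
>>>>>> for illustration):
>>>>>>
>>>>>>     #include <petscdmda.h>
>>>>>>     #include <petscdmswarm.h>
>>>>>>
>>>>>>     int main(int argc, char **argv)
>>>>>>     {
>>>>>>       DM         da, sw;
>>>>>>       PetscInt   Npart = 10;
>>>>>>       PetscReal *coor;
>>>>>>
>>>>>>       PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
>>>>>>       /* background mesh on [-1,1]x[-1,1] with ghosted boundaries */
>>>>>>       PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_GHOSTED, DM_BOUNDARY_GHOSTED,
>>>>>>                              DMDA_STENCIL_BOX, 21, 21, PETSC_DECIDE, PETSC_DECIDE, 1, 1,
>>>>>>                              NULL, NULL, &da));
>>>>>>       PetscCall(DMSetUp(da));
>>>>>>       PetscCall(DMDASetUniformCoordinates(da, -1., 1., -1., 1., 0., 0.));
>>>>>>       /* particle swarm attached to the mesh */
>>>>>>       PetscCall(DMCreate(PETSC_COMM_WORLD, &sw));
>>>>>>       PetscCall(DMSetType(sw, DMSWARM));
>>>>>>       PetscCall(DMSetDimension(sw, 2));
>>>>>>       PetscCall(DMSwarmSetType(sw, DMSWARM_PIC));
>>>>>>       PetscCall(DMSwarmSetCellDM(sw, da));
>>>>>>       PetscCall(DMSwarmFinalizeFieldRegister(sw));
>>>>>>       PetscCall(DMSwarmSetLocalSizes(sw, Npart, 4));
>>>>>>       /* every rank places its particles along the same diagonal of the
>>>>>>          global domain, so most of them start on the wrong rank */
>>>>>>       PetscCall(DMSwarmGetField(sw, DMSwarmPICField_coor, NULL, NULL, (void **)&coor));
>>>>>>       for (PetscInt p = 0; p < Npart; p++) {
>>>>>>         coor[2 * p]     = -0.9 + 1.8 * p / (PetscReal)(Npart - 1);
>>>>>>         coor[2 * p + 1] = coor[2 * p];
>>>>>>       }
>>>>>>       PetscCall(DMSwarmRestoreField(sw, DMSwarmPICField_coor, NULL, NULL, (void **)&coor));
>>>>>>       /* send each particle to the rank owning its cell */
>>>>>>       PetscCall(DMSwarmMigrate(sw, PETSC_TRUE));
>>>>>>       PetscCall(DMDestroy(&sw));
>>>>>>       PetscCall(DMDestroy(&da));
>>>>>>       PetscCall(PetscFinalize());
>>>>>>       return 0;
>>>>>>     }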
>>>>> Thanks for sending this. I found the problem. Someone has some overly
>>>>> fancy code inside DMDA to figure out the local bounding box from the
>>>>> coordinates. It is broken for DM_BOUNDARY_GHOSTED, but we never tested
>>>>> with this. I will fix it.
>>>> Okay, I think this fix is correct:
>>>>
>>>> https://gitlab.com/petsc/petsc/-/merge_requests/5802
>>>>
>>>> I incorporated your test as src/dm/impls/da/tests/ex1.c. Can you take a
>>>> look and see if this fixes your issue?
>>> Yes, we have tested 2d and 3d, with various combinations of
>>> DM_BOUNDARY_* along different directions, and it works like a charm.
>>>
>>> On a side note, neither DMSwarmViewXDMF nor DMSwarmMigrate seems to be
>>> implemented for 1d: I get
>>>
>>>   [0]PETSC ERROR: No support for this operation for this object type
>>>   [0]PETSC ERROR: Support not provided for 1D
>>>
>>> However, currently I have no need for this feature.
>>>
>>> Finally, if the test is meant to stay in the source, you may remove the
>>> call to DMSwarmRegisterPetscDatatypeField, as in the attached patch.
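>>>
>>> (For context, the removed call has this shape; the field name here is
>>> hypothetical. My understanding is that in DMSWARM_PIC mode the
>>> coordinate field is registered automatically by DMSwarmSetType, so an
>>> extra registration of a field the test never uses can simply go:)
>>>
>>>     PetscCall(DMSwarmRegisterPetscDatatypeField(sw, "myfield", 1, PETSC_REAL));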
>>>
>>> Thanks a lot!!
>> Thanks! Glad it works.
>>
>>    Matt
<p class="moz-signature" cols="72">There are still problems when
not using 1,2 or 4 cpus. Any other number of cpus that I've
tested does not work corectly.<br>
</p>
</blockquote>
I have now modified private_DMDALocatePointsIS_2D_Regular to print out some
debugging information. I see that it is called twice during migration, once
before and once after DMSwarmMigrate_DMNeighborScatter. If I understand
correctly, the second call to private_DMDALocatePointsIS_2D_Regular should be
able to locate all particles owned by the rank, but it fails for some of them
because they have been sent to the wrong rank (despite being well away from
process boundaries). The instrumentation is sketched below.
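Roughly, inside the location loop I print each candidate point together with
the rank (a sketch only: dm, npoints, coor, and cells stand in for whatever
names that function actually uses):

    PetscMPIInt rank;
    PetscCallMPI(MPI_Comm_rank(PetscObjectComm((PetscObject)dm), &rank));
    for (PetscInt p = 0; p < npoints; p++) {
      /* the cell (if any) this rank found for point p */
      PetscCall(PetscSynchronizedPrintf(PetscObjectComm((PetscObject)dm),
                "[%d] point %" PetscInt_FMT " (%g, %g) -> cell %" PetscInt_FMT "\n",
                rank, p, (double)coor[2 * p], (double)coor[2 * p + 1], cells[p]));
    }
    PetscCall(PetscSynchronizedFlush(PetscObjectComm((PetscObject)dm), PETSC_STDOUT));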
For example, running the test src/dm/impls/da/tests/ex1.c with Nx=21 (20x20
Q1 elements on [-1,1]x[-1,1]) on 3 processors (see the ownership check
sketched after this list):

- the particles at (-0.191,-0.462) and (0.191,-0.462) are sent to cpu2
  instead of cpu0;
- those at (-0.287,-0.693) and (0.287,-0.693) are sent to cpu1 instead of
  cpu0;
- those at (0.191,0.462) and (-0.191,0.462) are sent to cpu0 instead of
  cpu2.
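To double-check which rank should own a point, each rank can print the
bounding box of its local portion of the background mesh, and the particle
coordinates can then be compared against it by hand (a sketch; da is assumed
to be the DMDA from the test):

    PetscReal   lmin[3], lmax[3];
    PetscMPIInt rank;
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
    /* bounding box of the locally owned part of the mesh */
    PetscCall(DMGetLocalBoundingBox(da, lmin, lmax));
    PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD,
              "[%d] local box [%g, %g] x [%g, %g]\n", rank,
              (double)lmin[0], (double)lmax[0], (double)lmin[1], (double)lmax[1]));
    PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));

Comparing the coordinates above against each rank's box shows that these
particles end up on ranks whose boxes do not contain them.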
(This is 2d and thus not affected by the 3d issue mentioned yesterday on
petsc-dev. Tests were made on the release branch pulled this morning, i.e.
on commit bebdc8d016f.)
I attach the output separated by process.

If you have any hints, they would be appreciated.

Thanks

    Matteo