<div dir="ltr"><div dir="ltr">On Tue, Dec 5, 2023 at 7:57 AM Jordi Manyer Fuertes <<a href="mailto:jordi.manyer@monash.edu">jordi.manyer@monash.edu</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><u></u>
<div>
<p>Thanks for the prompt response. Both answers look like what I'm
doing. <br>
</p>
<p>After playing a bit more with the solver, I managed to make it run in
parallel with different boundary conditions (full Dirichlet BCs
vs. mixed Neumann + Dirichlet). This raises two questions: <br>
</p>
<p>- How relevant are the boundary conditions (eliminating Dirichlet
rows/cols vs. weak Neumann BCs) to the solver? Should I modify
something when changing boundary conditions? <br>
</p>
<p></p></div></blockquote><div>The rigid body kernel is independent of the boundary conditions, and is only really important for the coarse grids. However, it is really easy to ruin a solve with inconsistent boundary conditions, or with conditions that cause a singularity at a point where the condition type changes.</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><p>- Also, the solver did well with the old BCs when run on a single
processor (but not in parallel). This seems odd, since parallel
and serial behavior should be consistent (or not?). Could it be
the fault of PCGAMG?</p></div></blockquote><div>This is unlikely. We have many parallel tests of elasticity (SNES ex17, ex56, ex77, etc.), and we do not see problems there. It seems more likely that the system is not being assembled correctly in parallel. Did you check that the serial and parallel matrices match? <br></div>
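<div><br></div><div>If it helps, here is a minimal, untested sketch of one way to check: dump the assembled operator from a serial run and from a parallel run to files, then load both on one process and compare them. The function name and file names below are just placeholders.</div><div><br></div><div><pre>
#include <petscmat.h>

/* Untested sketch: write the assembled operator to a binary file so that
   serial and parallel runs can be compared offline, e.g. with fname set to
   "A_serial.bin" or "A_parallel.bin". */
static PetscErrorCode DumpOperator(Mat A, const char *fname)
{
  PetscViewer viewer;

  PetscFunctionBeginUser;
  PetscCall(PetscViewerBinaryOpen(PetscObjectComm((PetscObject)A), fname, FILE_MODE_WRITE, &viewer));
  PetscCall(MatView(A, viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscFunctionReturn(PETSC_SUCCESS);
}
</pre></div><div>Then load the two files on a single process with MatLoad(), subtract one from the other with MatAXPY(), and look at the MatNorm() of the difference (and do the same for the right-hand sides). If the difference is not close to zero, the parallel assembly is the problem rather than GAMG.</div>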
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><p> I believe the default local solver is ILU;
should I be changing it to LU or something else for this kind of
problem?</p></div></blockquote><div>Do you mean the smoother for AMG? No, the default smoother is Chebyshev/Jacobi, which is the same in serial and in parallel.</div><div><br></div><div> Thanks,</div><div><br></div><div>  Matt</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
<p>Thank you both again,</p>
<p>Jordi<br>
</p>
<p><br>
</p>
<div>On 5/12/23 04:46, Matthew Knepley
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">On Mon, Dec 4, 2023 at 12:01 PM Jordi Manyer
Fuertes via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>>
wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Dear
PETSc users/developers,<br>
<br>
I am currently trying to use the method
`MatNullSpaceCreateRigidBody` <br>
together with `PCGAMG` to efficiently precondition an
elasticity solver <br>
in 2D/3D.<br>
<br>
I have managed to make it work in serial (or with 1 MPI
rank) with <br>
an h-independent number of iterations (which is great), but the
solver <br>
diverges in parallel.<br>
<br>
I assume it has to do with the coordinate vector I am
building the <br>
null-space with not being set up correctly. The documentation
is not that <br>
clear on which nodes exactly have to be set in each
partition. Does it <br>
require nodes corresponding to owned dofs, or all dofs in
each partition <br>
(owned+ghost)? What ghost layout should the `Vec` have?<br>
<br>
Any other tips about what I might be doing wrong?<br>
</blockquote>
<div><br>
</div>
<div>What we assume is that you have some elastic problem
formulated in primal unknowns (displacements) so that the
solution vector looks like this:</div>
<div><br>
</div>
<div> [ d^0_x d^0_y d^0_z d^1_x ..... ]</div>
<div><br>
</div>
<div>or whatever spatial dimension you have. We expect to get
a global vector that looks like that, but instead</div>
<div>of displacements, we get the coordinates that each
displacement corresponds to. We make the generators of
translations:</div>
<div><br>
</div>
<div> [ 1 0 0 1 0 0 1 0 0 1 0 0... ]</div>
<div>
<div> [ 0 1 0 0 1 0 0 1 0 0 1 0... ]</div>
<div>
<div> [ 0 0 1 0 0 1 0 0 1 0 0 1... ]</div>
<div><br>
</div>
<div>for which we do not need the coordinates, and then
the generators of rotations about each axis, for which</div>
<div>we _do_ need the coordinates, since we need to know
how much each point moves if you rotate about some
center.</div>
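<div><br>
</div>
<div>For example, the generator of rotation about the z axis has
entries (-(y_i - y_c), x_i - x_c, 0) at a node with coordinates
(x_i, y_i, z_i), for some center (x_c, y_c, z_c). As a minimal,
untested sketch of the setup, assuming the application keeps the
interlaced coordinates of its locally owned dofs in an array
(called `nodeCoords` here purely for illustration), it could look
like:</div>
<div><br>
</div>
<div><pre>
#include <petscmat.h>

/* Untested sketch: attach the rigid body modes to A so PCGAMG can use them.
   nodeCoords is assumed to hold [x0 y0 z0 x1 y1 z1 ...] for the nodes owning
   the locally owned dofs, in the same order as the dofs themselves. */
static PetscErrorCode AttachRigidBodyModes(Mat A, const PetscReal *nodeCoords)
{
  Vec          coords;
  MatNullSpace nullsp;
  PetscScalar *c;
  PetscInt     n, i;

  PetscFunctionBeginUser;
  /* Same parallel layout as the solution vector: owned dofs only, no ghosts */
  PetscCall(MatCreateVecs(A, &coords, NULL));
  PetscCall(VecGetLocalSize(coords, &n));
  PetscCall(VecGetArray(coords, &c));
  for (i = 0; i < n; i++) c[i] = nodeCoords[i];
  PetscCall(VecRestoreArray(coords, &c));
  /* MatNullSpaceCreateRigidBody() reads the spatial dimension from the block
     size of coords; if A does not carry it, call VecSetBlockSize(coords, dim) */
  PetscCall(MatNullSpaceCreateRigidBody(coords, &nullsp));
  PetscCall(MatSetNearNullSpace(A, nullsp)); /* near null space, used by GAMG */
  PetscCall(MatNullSpaceDestroy(&nullsp));
  PetscCall(VecDestroy(&coords));
  PetscFunctionReturn(PETSC_SUCCESS);
}
</pre></div>
<div>The point is that `coords` is an ordinary global vector with the
same parallel layout as the solution (locally owned dofs only, no
ghosts). PCSetCoordinates() on the PC is an alternative way of handing
the coordinates to GAMG.</div>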
<div><br>
</div>
<div> Does that make sense?</div>
<div><br>
</div>
<div> Thanks,</div>
<div><br>
</div>
<div> Matt</div>
<div><br>
</div>
</div>
</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Thanks,<br>
<br>
Jordi<br>
<br>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
<span class="gmail_signature_prefix">-- </span><br>
<div dir="ltr" class="gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>What most experimenters take for granted before
they begin their experiments is infinitely more
interesting than any results to which their
experiments lead.<br>
-- Norbert Wiener</div>
<div><br>
</div>
<div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>