[petsc-users] Help for MatNullSpaceCreateRigidBody

Mark Adams mfadams at lbl.gov
Tue Dec 5 10:09:27 CST 2023


I would suggest (excuse me if I missed something):

*** Test a simple Jacobi solver in serial and in parallel and verify that
the convergence histories (-ksp_monitor) are identical up to round-off
error.
*** Test GAMG, serial and parallel, without MatNullSpaceCreateRigidBody,
and verify that the convergence is close, say well within 20% in the
number of iterations.
*** Next you can get the null space vectors (v_i), compute q = A*v_i, and
verify that each q is zero except at the BC rows (see the sketch below).
  - To make this test simpler, you could temporarily remove the BCs from A;
each q should then have a norm on the order of machine epsilon.
  - There is no need to actually solve with this no-BC A.
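
For the first two tests you can just diff the monitor output directly,
e.g. (the binary name is a placeholder for your application):

  mpiexec -n 1 ./your_app -pc_type jacobi -ksp_monitor
  mpiexec -n 4 ./your_app -pc_type jacobi -ksp_monitor

For the third test, a minimal sketch, assuming A is your assembled
operator and nullsp came from MatNullSpaceCreateRigidBody():

  PetscInt   n;
  PetscBool  has_const;
  const Vec *vecs;
  Vec        q;
  PetscReal  nrm;
  PetscCall(MatNullSpaceGetVecs(nullsp, &has_const, &n, &vecs));
  PetscCall(MatCreateVecs(A, NULL, &q));           /* q in the range of A */
  for (PetscInt i = 0; i < n; i++) {
    PetscCall(MatMult(A, vecs[i], q));             /* q = A * v_i */
    PetscCall(VecNorm(q, NORM_2, &nrm));
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "|| A v_%d || = %g\n", (int)i, (double)nrm));
  }
  PetscCall(VecDestroy(&q));

With the BCs still in A, only the BC rows contribute to these norms; with
the BCs removed, each norm should be on the order of machine epsilon.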

Mark

On Tue, Dec 5, 2023 at 8:46 AM Matthew Knepley <knepley at gmail.com> wrote:

> On Tue, Dec 5, 2023 at 7:57 AM Jordi Manyer Fuertes <
> jordi.manyer at monash.edu> wrote:
>
>> Thanks for the prompt response. Both answers look like what I'm doing.
>>
>> After playing a bit more with the solver, I managed to make it run in
>> parallel with different boundary conditions (full Dirichlet BCs vs. mixed
>> Neumann + Dirichlet). This raises two questions:
>>
>> - How relevant are boundary conditions (eliminating Dirichlet rows/columns
>> vs. weak Neumann BCs) to the solver? Should I modify something when changing
>> boundary conditions?
>>
> The rigid body kernel is independent of boundary conditions, and is only
> really important for the coarse grids. However, it is really easy to ruin a
> solve with inconsistent boundary conditions, or with conditions which cause
> a singularity at a change point.
>
>> - Also, the solver did well with the old BCs when run on a single
>> processor (but not in parallel). This seems odd, since parallel and serial
>> behavior should be consistent (or not?). Could it be the fault of PCGAMG?
>>
> This is unlikely. We have many parallel tests of elasticity (SNES ex17,
> ex56, ex77, etc.). We do not see problems. It seems more likely that the
> system might not be assembled correctly in parallel. Did you check that the
> matrices match?
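>
> A quick way to check (a sketch; the file names here are placeholders):
> dump the assembled operator from each run with
>
>   -ksp_view_mat binary:A_serial.bin    (1 rank)
>   -ksp_view_mat binary:A_parallel.bin  (N ranks)
>
> and then compare the two dumps on a single rank:
>
>   Mat         As, Ap;
>   PetscViewer v;
>   PetscBool   eq;
>   PetscCall(MatCreate(PETSC_COMM_SELF, &As));
>   PetscCall(PetscViewerBinaryOpen(PETSC_COMM_SELF, "A_serial.bin", FILE_MODE_READ, &v));
>   PetscCall(MatLoad(As, v));
>   PetscCall(PetscViewerDestroy(&v));
>   PetscCall(MatCreate(PETSC_COMM_SELF, &Ap));
>   PetscCall(PetscViewerBinaryOpen(PETSC_COMM_SELF, "A_parallel.bin", FILE_MODE_READ, &v));
>   PetscCall(MatLoad(Ap, v));
>   PetscCall(PetscViewerDestroy(&v));
>   PetscCall(MatEqual(As, Ap, &eq));  /* exact; use MatAXPY() + MatNorm() for a tolerance */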
>
>> I believe the default local solver is ILU; should I be changing it to LU
>> or something else for this kind of problem?
>>
> Do you mean the smoother for AMG? No, the default is Chebyshev/Jacobi,
> which is the same in parallel.
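>
> For reference, the options that inspect or change the level smoothers
> are, e.g.,
>
>   -pc_type gamg -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi
>
> and -ksp_view will print exactly what is used on each level.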
>
>   Thanks,
>
>      Matt
>
> Thank you both again,
>>
>> Jordi
>>
>>
>> On 5/12/23 04:46, Matthew Knepley wrote:
>>
>> On Mon, Dec 4, 2023 at 12:01 PM Jordi Manyer Fuertes via petsc-users <
>> petsc-users at mcs.anl.gov> wrote:
>>
>>> Dear PETSc users/developers,
>>>
>>> I am currently trying to use the method `MatNullSpaceCreateRigidBody`
>>> together with `PCGAMG` to efficiently precondition an elasticity solver
>>> in 2D/3D.
>>>
>>> I have managed to make it work in serial (or with 1 MPI rank) with an
>>> h-independent number of iterations (which is great), but the solver
>>> diverges in parallel.
>>>
>>> I assume it has to do with the coordinate vector I am building the
>>> null-space with not being set up correctly. The documentation is not
>>> that clear on exactly which nodes have to be set in each partition. Does
>>> it require the nodes corresponding to owned dofs, or all dofs in each
>>> partition (owned + ghost)? What ghost layout should the `Vec` have?
>>>
>>> Any other tips about what I might be doing wrong?
>>>
>>
>> What we assume is that you have some elastic problem formulated in primal
>> unknowns (displacements) so that the solution vector looks like this:
>>
>>   [ d^0_x d^0_y d^0_z d^1_x ..... ]
>>
>> or whatever spatial dimension you have. We expect to get a global vector
>> that looks like that, but instead
>> of displacements, we get the coordinates that each displacement
>> corresponds to. We make the generators of translations:
>>
>>   [ 1 0 0 1 0 0 1 0 0 1 0 0... ]
>>   [ 0 1 0 0 1 0 0 1 0 0 1 0... ]
>>   [ 0 0 1 0 0 1 0 0 1 0 0 1... ]
>>
>> for which we do not need the coordinates, and then the generators of
>> rotations about each axis, for which
>> we _do_ need the coordinates, since we need to know how much each point
>> moves if you rotate about some center.
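>>
>> To make that concrete, a minimal sketch (nowned and dim are placeholders
>> for your own sizes): the coordinate Vec is an ordinary global vector over
>> the owned dofs only, with no ghost entries, laid out exactly like the
>> solution vector above:
>>
>>   Vec          coords;
>>   MatNullSpace nullsp;
>>   PetscCall(VecCreateMPI(PETSC_COMM_WORLD, nowned, PETSC_DECIDE, &coords));
>>   PetscCall(VecSetBlockSize(coords, dim));
>>   /* fill with [ x_0 y_0 z_0 x_1 y_1 z_1 ... ] for the owned nodes */
>>   PetscCall(MatNullSpaceCreateRigidBody(coords, &nullsp));
>>   PetscCall(MatSetNearNullSpace(A, nullsp));
>>   PetscCall(MatNullSpaceDestroy(&nullsp));
>>   PetscCall(VecDestroy(&coords));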
>>
>>   Does that make sense?
>>
>>    Thanks,
>>
>>       Matt
>>
>>
>>
>>> Thanks,
>>>
>>> Jordi
>>>
>>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>