[petsc-users] A number of questions about DMDA with SNES and Quasi-Newton methods
zakaryah .
zakaryah at gmail.com
Tue Oct 10 12:47:24 CDT 2017
Thanks Lukasz, this is extremely helpful.
By dynamic relaxation, do you mean converting the elliptic PDEs
(Euler-Lagrange eqs.) to parabolic PDEs by introducing time, i.e. instead
of solving (first variation) = 0, solving dx/dt = -nu * (first variation),
where nu is like a viscosity? It seems to me that this should approach an
extremum of the Lagrangian. What do you mean by not consistent?
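Just so I'm sure I understand the idea, the naive version I'm picturing is
something like this (only a sketch - forward Euler, SNESComputeFunction
standing in for my residual, and nu, dt, maxit, tol chosen by hand):

#include <petscsnes.h>

/* Sketch of "dynamic relaxation" as I understand it: march
   dx/dt = -nu*F(x) with forward Euler until ||F|| is small.
   Assumes the residual has already been attached to the SNES
   with SNESSetFunction(). */
PetscErrorCode RelaxToStationaryPoint(SNES snes,Vec x,PetscReal nu,
                                      PetscReal dt,PetscInt maxit,
                                      PetscReal tol)
{
  Vec            F;
  PetscReal      fnorm;
  PetscInt       it;
  PetscErrorCode ierr;

  ierr = VecDuplicate(x,&F);CHKERRQ(ierr);
  for (it = 0; it < maxit; ++it) {
    ierr = SNESComputeFunction(snes,x,F);CHKERRQ(ierr); /* F <- residual at x */
    ierr = VecNorm(F,NORM_2,&fnorm);CHKERRQ(ierr);
    if (fnorm < tol) break;
    ierr = VecAXPY(x,-nu*dt,F);CHKERRQ(ierr);           /* x <- x - nu*dt*F(x) */
  }
  ierr = VecDestroy(&F);CHKERRQ(ierr);
  return 0;
}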
I will look into this and the other ideas you mentioned - thanks again!
Cheers, Zak
On Tue, Oct 10, 2017 at 1:06 PM, Lukasz Kaczmarczyk <
Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote:
>
>
>
> On 10 Oct 2017, at 17:08, zakaryah . <zakaryah at gmail.com> wrote:
>
> Thanks for clearing that up.
>
> I'd appreciate any further help. Here's a summary:
>
> My ultimate goal is to find a vector field which minimizes an action. The
> action is a (nonlinear) function of the field and its first spatial
> derivatives.
>
> My current approach is to derive the (continuous) Euler-Lagrange
> equations, which results in a nonlinear PDE that the minimizing field must
> satisfy. These Euler-Lagrange equations are then discretized, and I'm
> trying to use an SNES to solve them.
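>
> (For context, the structure is roughly the standard SNES-on-DMDA pattern -
> just a sketch here, with FormFunctionLocal, FormJacobianLocal and the user
> context standing in for my actual routines:
>
>   DM   da;
>   SNES snes;
>   Vec  x;
>
>   DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
>                DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,11,7,6,
>                PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,3,1,
>                NULL,NULL,NULL,&da);
>   SNESCreate(PETSC_COMM_WORLD,&snes);
>   SNESSetDM(snes,da);
>   DMDASNESSetFunctionLocal(da,INSERT_VALUES,
>                            (DMDASNESFunction)FormFunctionLocal,&user);
>   DMDASNESSetJacobianLocal(da,(DMDASNESJacobian)FormJacobianLocal,&user);
>   SNESSetFromOptions(snes);
>   DMCreateGlobalVector(da,&x);
>   SNESSolve(snes,NULL,x);
>
> so the SNES gets its residual and Jacobian callbacks through the DMDA.)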
>
> The problem is that the solver seems to reach a point at which the
> Jacobian (this corresponds to the second variation of the action, which is
> like a Hessian of the energy) becomes nearly singular, but where the
> residual (RHS of PDE) is not close to zero. The residual does not decrease
> over additional SNES iterations, and the line search results in tiny step
> sizes. My interpretation is that this point of stagnation is a critical
> point.
>
> I have checked the hand-coded Jacobian very carefully and I am confident
> that it is correct.
>
> I am guessing that such a situation is well-known in the field, but I
> don't know the lingo or literature. If anyone has suggestions I'd be
> thrilled. Are there documentation/methodologies within PETSc for this type
> of situation?
>
> Is there any advantage to discretizing the action itself and using the
> optimization routines? With minor modifications I'll have the gradient and
> Hessian calculations coded. Are the optimization routines likely to
> stagnate in the same way as the nonlinear solver, or can they take
> advantage of the structure of the problem to overcome this?
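>
> (Concretely, I'm picturing the usual Tao pattern for that route - again just
> a sketch, with FormActionAndGradient, FormActionHessian, x, H and user
> standing in for my own code:
>
>   Tao tao;
>   TaoCreate(PETSC_COMM_WORLD,&tao);
>   TaoSetType(tao,TAONLS);        /* Newton line search; TAOLMVM is another option */
>   TaoSetInitialVector(tao,x);
>   TaoSetObjectiveAndGradientRoutine(tao,FormActionAndGradient,&user);
>   TaoSetHessianRoutine(tao,H,H,FormActionHessian,&user);
>   TaoSetFromOptions(tao);
>   TaoSolve(tao);
>
> where FormActionAndGradient would evaluate the discretized action and its
> gradient, and FormActionHessian would assemble the Hessian into H.)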
>
> Thanks a lot in advance for any help.
>
>
>
> Hello,
>
> A problem similar to yours, i.e. a singular tangent matrix, is well known
> in structural and solid mechanics when the structure becomes unstable and
> buckles. There are good and bad methods for solving this.
>
> One is dynamic relaxation, which is not consistent. A good method is to
> use a control equation, which controls the external forces/boundary conditions.
> Search for spherical arc-length control and the many derivatives of this method.
>
> The control equation is a scalar equation which adds "moustaches" to the
> matrix. You can make a shell preconditioner which solves such a system
> efficiently with iterative solvers. I have been using this for some time
> with PETSc, with great success.
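>
> Schematically (just a sketch - K is the tangent matrix, b, c, d the
> "moustaches" and control coefficient, F the residual and g the control
> residual), the augmented system is
>
>     [ K    b ] [dx]   [ -F ]
>     [ c^T  d ] [dl] = [ -g ]
>
> and one way to solve it (assuming K itself can still be solved) is block
> elimination with two solves of K, so the existing KSP/PC for K is reused
> inside the shell preconditioner:
>
>   Vec v,w,rhs; PetscScalar cv,cw,dl;
>   VecDuplicate(b,&v); VecDuplicate(b,&w); VecDuplicate(b,&rhs);
>   KSPSolve(ksp,b,v);                          /* K v = b         */
>   VecCopy(F,rhs); VecScale(rhs,-1.0);
>   KSPSolve(ksp,rhs,w);                        /* K w = -F        */
>   VecDot(c,v,&cv); VecDot(c,w,&cw);
>   dl = (-g - cw)/(d - cv);                    /* control unknown */
>   VecWAXPY(dx,-dl,v,w);                       /* dx = w - dl*v   */
>   VecDestroy(&v); VecDestroy(&w); VecDestroy(&rhs);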
>
> Regards,
> Lukasz
>
>
>
> On Sun, Oct 8, 2017 at 5:57 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>>
>>    There is apparently confusion in understanding the ordering. Is this
>> all on one process that you get funny results? Are you using
>> MatSetValuesStencil() to provide the matrix values (it is generally easier
>> than computing the global indices yourself)? In parallel, MatView() always
>> maps the rows and columns to the natural ordering before printing, if you
>> use a matrix created from the DMDA. If you create the matrix yourself, it
>> has a different MatView in parallel that prints in the PETSc ordering.
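>>
>>    For example (a sketch; i,j,k,c and the values are placeholders), inside
>> the Jacobian routine you can set entries by grid index and dof and let the
>> DMDA map them to the global numbering:
>>
>>     MatStencil  row,col[2];
>>     PetscScalar v[2];
>>     row.i = i; row.j = j; row.k = k; row.c = c;               /* this point, this dof */
>>     col[0] = row;                                             /* diagonal entry       */
>>     col[1].i = i+1; col[1].j = j; col[1].k = k; col[1].c = c; /* +x neighbour         */
>>     v[0] = diag; v[1] = east;
>>     MatSetValuesStencil(J,1,&row,2,col,v,INSERT_VALUES);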
>>
>>
>> Barry
>>
>>
>>
>> > On Oct 8, 2017, at 8:05 AM, zakaryah . <zakaryah at gmail.com> wrote:
>> >
>> > I'm more confused than ever. I don't understand the output of
>> > -snes_type test -snes_test_display.
>> >
>> > For the user-defined state of the vector (where I'd like to test the
>> > Jacobian), the finite difference Jacobian at row 0 evaluates as:
>> >
>> > row 0: (0, 10768.6) (1, 2715.33) (2, -1422.41) (3, -6121.71) (4,
>> 287.797) (5, 744.695) (9, -1454.66) (10, 6.08793) (11, 148.172) (12,
>> 13.1089) (13, -36.5783) (14, -9.99399) (27, -3423.49) (28, -2175.34)
>> (29, 548.662) (30, 145.753) (31, 17.6603) (32, -15.1079) (36, 76.8575)
>> (37, 16.325) (38, 4.83918)
>> >
>> > But the hand-coded Jacobian at row 0 evaluates as:
>> >
>> > row 0: (0, 10768.6) (1, 2715.33) (2, -1422.41) (3, -6121.71) (4,
>> 287.797) (5, 744.695) (33, -1454.66) (34, 6.08792) (35, 148.172) (36,
>> 13.1089) (37, -36.5783) (38, -9.99399) (231, -3423.49) (232, -2175.34)
>> (233, 548.662) (234, 145.753) (235, 17.6603) (236, -15.1079) (264,
>> 76.8575) (265, 16.325) (266, 4.83917) (267, 0.) (268, 0.) (269, 0.)
>> > and the difference between the Jacobians at row 0 evaluates as:
>> >
>> > row 0: (0, 0.000189908) (1, 7.17315e-05) (2, 9.31778e-05) (3,
>> 0.000514947) (4, 0.000178659) (5, 0.000178217) (9, -2.25457e-05) (10,
>> -6.34278e-06) (11, -5.93241e-07) (12, 9.48544e-06) (13, 4.79709e-06)
>> (14, 2.40016e-06) (27, -0.000335696) (28, -0.000106734) (29,
>> -0.000106653) (30, 2.73119e-06) (31, -7.93382e-07) (32, 1.24048e-07)
>> (36, -4.0302e-06) (37, 3.67494e-06) (38, -2.70115e-06) (39, 0.) (40,
>> 0.) (41, 0.)
>> >
>> > The difference in the column numbering between the finite difference and
>> > the hand-coded Jacobians looks like a serious problem to me, but I'm
>> > probably missing something.
>> >
>> > I am trying to use a 3D DMDA with 3 dof, a box stencil of width 1, and
>> > for this test problem the grid dimensions are 11x7x6. For a grid point
>> > x,y,z, and dof c, is the index calculated as c + 3*x + 3*11*y + 3*11*7*z?
>> > If so, then the column numbers of the hand-coded Jacobian match those of
>> > the 27 point stencil I have in mind. However, I am then at a loss to
>> > explain the column numbers in the finite difference Jacobian.
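>> >
>> > (Written out as code, the index I have in mind for grid point (x,y,z) and
>> > dof c is
>> >
>> >   PetscInt idx = c + 3*(x + 11*(y + 7*z));   /* = c + 3*x + 33*y + 231*z */
>> >
>> > just to make the question concrete.)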
>> >
>> >
>> >
>> >
>> > On Sat, Oct 7, 2017 at 1:49 PM, zakaryah . <zakaryah at gmail.com> wrote:
>> > OK - I ran with -snes_monitor -snes_converged_reason
>> > -snes_linesearch_monitor -ksp_monitor -ksp_monitor_true_residual
>> > -ksp_converged_reason -pc_type svd -pc_svd_monitor -snes_type newtonls
>> > -snes_compare_explicit
>> >
>> > and here is the full error message, output immediately after
>> >
>> > Finite difference Jacobian
>> > Mat Object: 24 MPI processes
>> > type: mpiaij
>> >
>> > [0]PETSC ERROR: --------------------- Error Message
>> --------------------------------------------------------------
>> >
>> > [0]PETSC ERROR: Invalid argument
>> >
>> > [0]PETSC ERROR: Matrix not generated from a DMDA
>> >
>> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> for trouble shooting.
>> >
>> > [0]PETSC ERROR: Petsc Release Version 3.7.6, Apr, 24, 2017
>> >
>> > [0]PETSC ERROR: ./CalculateOpticalFlow on a arch-linux2-c-opt named
>> node046.hpc.rockefeller.internal by zfrentz Sat Oct 7 13:44:44 2017
>> >
>> > [0]PETSC ERROR: Configure options --prefix=/ru-auth/local/home/zfrentz/PETSc-3.7.6
>> --download-fblaslapack -with-debugging=0
>> >
>> > [0]PETSC ERROR: #1 MatView_MPI_DA() line 551 in
>> /rugpfs/fs0/home/zfrentz/PETSc/build/petsc-3.7.6/src/dm/impls/da/fdda.c
>> >
>> > [0]PETSC ERROR: #2 MatView() line 901 in /rugpfs/fs0/home/zfrentz/PETSc
>> /build/petsc-3.7.6/src/mat/interface/matrix.c
>> >
>> > [0]PETSC ERROR: #3 SNESComputeJacobian() line 2371 in
>> /rugpfs/fs0/home/zfrentz/PETSc/build/petsc-3.7.6/src/snes/
>> interface/snes.c
>> >
>> > [0]PETSC ERROR: #4 SNESSolve_NEWTONLS() line 228 in
>> /rugpfs/fs0/home/zfrentz/PETSc/build/petsc-3.7.6/src/snes/impls/ls/ls.c
>> >
>> > [0]PETSC ERROR: #5 SNESSolve() line 4005 in
>> /rugpfs/fs0/home/zfrentz/PETSc/build/petsc-3.7.6/src/snes/
>> interface/snes.c
>> >
>> > [0]PETSC ERROR: #6 solveWarp3D() line 659 in
>> /ru-auth/local/home/zfrentz/Code/OpticalFlow/working/October6_2017/mshs.c
>> >
>> >
>> > On Tue, Oct 3, 2017 at 5:37 PM, Jed Brown <jed at jedbrown.org> wrote:
>> > Always always always send the whole error message.
>> >
>> > "zakaryah ." <zakaryah at gmail.com> writes:
>> >
>> > > I tried -snes_compare_explicit, and got the following error:
>> > >
>> > > [0]PETSC ERROR: Invalid argument
>> > >
>> > > [0]PETSC ERROR: Matrix not generated from a DMDA
>> > >
>> > > What am I doing wrong?
>> > >
>> > > On Tue, Oct 3, 2017 at 10:08 AM, Jed Brown <jed at jedbrown.org> wrote:
>> > >
>> > >> Barry Smith <bsmith at mcs.anl.gov> writes:
>> > >>
>> > >> >> On Oct 3, 2017, at 5:54 AM, zakaryah . <zakaryah at gmail.com>
>> wrote:
>> > >> >>
>> > >> >> I'm still working on this. I've made some progress, and it looks
>> like
>> > >> the issue is with the KSP, at least for now. The Jacobian may be
>> > >> ill-conditioned. Is it possible to use -snes_test_display during an
>> > >> intermediate step of the analysis? I would like to inspect the
>> Jacobian
>> > >> after several solves have already completed,
>> > >> >
>> > >> > No, our current code for testing Jacobians is poor quality and
>> > >> > poorly organized. Needs a major refactoring to do things properly. Sorry
>> > >>
>> > >> You can use -snes_compare_explicit or -snes_compare_coloring to output
>> > >> differences on each Newton step.
>> > >>
>> >
>> >
>>
>>
>
>