[petsc-users] Problems about Picard and NolinearGS

Smith, Barry F. bsmith at mcs.anl.gov
Thu Dec 27 01:12:03 CST 2018


  If I understand correctly, you are able to provide the right-hand side function but only the block diagonal part of the Jacobian (with one block per field). Is this correct? 

  When you use the block diagonal part of the Jacobian as the preconditioner for the matrix-free application of the Jacobian, you get very slow convergence of KSP? 

  And you are trying to use the block diagonal part of the Jacobian as the operator A(x) for the Picard iteration? This will not in general work, because Picard requires an accurate application of the operator A(); using just the block diagonal part of A() does not even give you a consistent nonlinear system to solve. 
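
  For reference, a minimal sketch (in C) of how SNESSetPicard() is normally wired up; FormRHS() and FormPicardMatrix() are hypothetical names for your b(x) and for the full operator A(x), not just its block diagonal part:

    #include <petscsnes.h>

    /* Hypothetical callbacks: b(x), and the full Picard operator A(x). */
    extern PetscErrorCode FormRHS(SNES, Vec, Vec, void *);               /* computes b(x)  */
    extern PetscErrorCode FormPicardMatrix(SNES, Vec, Mat, Mat, void *); /* assembles A(x) */

    PetscErrorCode SetupPicard(SNES snes, Vec r, Mat A, void *ctx)
    {
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      /* SNES then solves F(x) = A(x) x - b(x) = 0.  The same matrix is passed as
         Amat and Pmat here; run with -snes_mf_operator so the true Jacobian is
         applied matrix-free while A(x^{n}) is used to build the preconditioner. */
      ierr = SNESSetPicard(snes, r, FormRHS, A, A, FormPicardMatrix, ctx);CHKERRQ(ierr);
      ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }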

  I think that your fundamental problem is that the block diagonal part of the Jacobian is simply a very poor approximation to the true Jacobian, and hence any preconditioner built from it is a poor preconditioner. I think you need to bite the bullet and compute the full Jacobian, including the terms that couple the different fields; this way you get a good approximation to the true Jacobian and can solve each KSP linear system efficiently. How difficult would it be to compute these other terms in the Jacobian?
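
  To make that concrete, a rough, hypothetical sketch of a Jacobian callback that also fills the cross-field coupling entries; the row/column indices and the numerical values are placeholders for whatever your discretization actually produces:

    #include <petscsnes.h>

    /* Hypothetical Jacobian callback: assembles BOTH the within-field (block
       diagonal) entries and the terms coupling different fields, so Pmat
       approximates the full Jacobian.  With -snes_mf_operator the true Jacobian
       is still applied matrix-free; Pmat only builds the preconditioner. */
    PetscErrorCode FormJacobian(SNES snes, Vec x, Mat Amat, Mat Pmat, void *ctx)
    {
      PetscErrorCode ierr;
      PetscInt       row, cols[2];
      PetscScalar    vals[2];

      PetscFunctionBeginUser;
      /* ... loop over grid points / unknowns ... */
      row     = 0;                 /* placeholder global row index       */
      cols[0] = 0;  vals[0] = 1.0; /* within-field (block diagonal) term */
      cols[1] = 1;  vals[1] = 0.5; /* cross-field coupling term          */
      ierr = MatSetValues(Pmat, 1, &row, 2, cols, vals, INSERT_VALUES);CHKERRQ(ierr);

      ierr = MatAssemblyBegin(Pmat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(Pmat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      if (Amat != Pmat) {
        ierr = MatAssemblyBegin(Amat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
        ierr = MatAssemblyEnd(Amat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      }
      PetscFunctionReturn(0);
    }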

  Barry

> On Dec 26, 2018, at 9:24 PM, Yingjie Wu <yjwu16 at gmail.com> wrote:
> 
> Thank you very much for your previous reply.
> I pay attention to the Picard and NonlinearGS methods because I use the snes_mf method with a preconditioning matrix to solve my problem. The quality of the preconditioning matrix and of the initial guess seriously affects the convergence. Because the preconditioning matrix is of poor quality and I am unable to give a reasonable initial value, the convergence is not good: a high-order ILU (ILU(20) in my case) is needed just to reduce the linear residual to ksp_rtol = 1e-2. In the future the program may become more complex and the convergence may get worse. Picard and NonlinearGS, however, are two methods that ensure convergence. Although their convergence rate is very slow, their results can be used to provide a reasonable initial value for my program. So I need a solver similar to Picard or a Gauss-Seidel-type iteration, which would be very useful for providing an initial guess or for comparing with the original program.
> I tried the Picard method, which splits the original residual function SNESComputeFunction F(x) = A(x)x - b(x) into two parts: the right-hand side b(x) is given to SNESPicardComputeFunction, and A(x) is passed to SNESPicardComputeJacobian (with snes_mf, I use A(x) as the preconditioning matrix). Because A(x) only includes the relations within a single physical field and does not consider the coupling between fields, it is equivalent to the block diagonal part of the Jacobian matrix J.
> I really need some advice on my question.
> 
> Thanks,
> Yingjie
> 
> Smith, Barry F. <bsmith at mcs.anl.gov> wrote on Wednesday, December 26, 2018 at 4:23 PM:
> 
>    Try with -snes_mf_operator; as the manual page of SNESSetPicard() indicates, this solves the system with Newton's method, using A(x^{n}) to construct the preconditioner.
> 
> > On Dec 26, 2018, at 7:48 AM, Yingjie Wu via petsc-users <petsc-users at mcs.anl.gov> wrote:
> > 
> > Dear Petsc developers:
> > Hi,
> > 1. I tried to use the Picard solver in PETSc, but the program didn't converge. My program is a thermal simulation that contains multiple physical fields and is a PDE problem. The error message is as follows. The reason I use Picard is that it can guarantee convergence (though slowly and expensively). I followed ex15.c, but I don't use a DM to organize the solution vector, so I tried SNESSetPicard().
> > 0 SNES Function norm 2.91302e+08 
> >     0 KSP Residual norm 5.79907e+08 
> >     1 KSP Residual norm 1.46843e-05 
> >   Linear solve converged due to CONVERGED_RTOL iterations 1
> >   1 SNES Function norm 2.891e+08 
> >     0 KSP Residual norm 5.5989e+08 
> >     1 KSP Residual norm 4.21314e-06 
> >   Linear solve converged due to CONVERGED_RTOL iterations 1
> >   2 SNES Function norm 2.78289e+08 
> >     0 KSP Residual norm 5.53553e+08 
> >     1 KSP Residual norm 2.04076e-05 
> >   Linear solve converged due to CONVERGED_RTOL iterations 1
> >   3 SNES Function norm 2.77833e+08 
> >     0 KSP Residual norm 5.52907e+08 
> >     1 KSP Residual norm 2.09919e-05 
> >   Linear solve converged due to CONVERGED_RTOL iterations 1
> >   4 SNES Function norm 2.77821e+08 
> >     0 KSP Residual norm 5.52708e+08 
> >     1 KSP Residual norm 2.08677e-05 
> >   Linear solve converged due to CONVERGED_RTOL iterations 1
> > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 4
> > SNES Object: 1 MPI processes
> >   type: newtonls
> >   maximum iterations=50, maximum function evaluations=10000
> >   tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
> >   total number of linear solver iterations=5
> >   total number of function evaluations=34
> >   norm schedule ALWAYS
> >   SNESLineSearch Object: 1 MPI processes
> >     type: bt
> >       interpolation: cubic
> >       alpha=1.000000e-04
> >     maxstep=1.000000e+08, minlambda=1.000000e-12
> >     tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
> >     maximum iterations=40
> >   KSP Object: 1 MPI processes
> >     type: gmres
> >       restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> >       happy breakdown tolerance 1e-30
> >     maximum iterations=10000, initial guess is zero
> >     tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
> >     left preconditioning
> >     using PRECONDITIONED norm type for convergence test
> >   PC Object: 1 MPI processes
> >     type: lu
> >       out-of-place factorization
> >       tolerance for zero pivot 2.22045e-14
> >       matrix ordering: nd
> >       factor fill ratio given 5., needed 5.48356
> >         Factored matrix follows:
> >           Mat Object: 1 MPI processes
> >             type: seqaij
> >             rows=11368, cols=11368
> >             package used to perform factorization: petsc
> >             total: nonzeros=234554, allocated nonzeros=234554
> >             total number of mallocs used during MatSetValues calls =0
> >               not using I-node routines
> >     linear system matrix = precond matrix:
> >     Mat Object: 1 MPI processes
> >       type: seqaij
> >       rows=11368, cols=11368
> >       total: nonzeros=42774, allocated nonzeros=56840
> >       total number of mallocs used during MatSetValues calls =0
> >         not using I-node routines
> > Are there any other examples of Picard methods? I'm very interested in this method.
> > 
> > 2. I found that ex15.c and ex19.c use NonlinearGS. I know it is an iterative method, but I don't know how to use it in the above examples. As far as I know, NonlinearGS is an iterative method alongside the subspace methods, so NonlinearGS should not be required if subspace methods are used. 
> > 
> > Thanks,
> > Yingjie
> >  
> > 
> 
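
  Regarding the NonlinearGS question (point 2 above), a minimal sketch of how a user-written sweep can be attached to SNES; UserNGS and SetupNGS are hypothetical names, with UserNGS playing the role of the NonlinearGS() helpers in ex15.c/ex19.c:

    #include <petscsnes.h>

    /* Hypothetical user sweep: given the current iterate x and right-hand side b
       (possibly NULL), perform one or more Gauss-Seidel-like updates of x in
       place, analogous to NonlinearGS() in ex15.c / ex19.c. */
    extern PetscErrorCode UserNGS(SNES snes, Vec x, Vec b, void *ctx);

    PetscErrorCode SetupNGS(SNES snes, void *ctx)
    {
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = SNESSetNGS(snes, UserNGS, ctx);CHKERRQ(ierr);
      /* Select it with -snes_type ngs to iterate with the sweep alone, or use it
         inside a composed solver, e.g. -snes_type fas, or as a nonlinear
         preconditioner with -npc_snes_type ngs around a Newton outer solve. */
      ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }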


