[petsc-users] Convergence rate for spatially varying Helmholtz system

Amneet Bhalla mail2amneet at gmail.com
Thu Sep 30 17:16:38 CDT 2021


>> If you want to solve systems accurately, you should non-dimensionalize
>> the system prior to discretization. This would mean that your C and b
>> have elements in the [1, D] range, where D is the dynamic range of your
>> problem, say 1e4, rather than these huge numbers you have now.

@Matt: We have non-dimensionalized the system, and the diagonal matrix
entries now range from 1 to 1e4. It still takes 4-5 iterations to converge
for the non-dimensional diagonal matrix, although the convergence trend
looks much better now:

Residual norms for temperature_ solve.
  0 KSP preconditioned resid norm 4.724547545716e-04 true resid norm 2.529423250889e+00 ||r(i)||/||b|| 4.397759655853e-05
  1 KSP preconditioned resid norm 6.504853596318e-06 true resid norm 2.197130494439e-02 ||r(i)||/||b|| 3.820021755431e-07
  2 KSP preconditioned resid norm 7.733420341215e-08 true resid norm 3.539290481432e-04 ||r(i)||/||b|| 6.153556501117e-09
  3 KSP preconditioned resid norm 6.419092250844e-10 true resid norm 5.220398494466e-06 ||r(i)||/||b|| 9.076400273607e-11
  4 KSP preconditioned resid norm 5.095955157158e-12 true resid norm 2.484163999489e-08 ||r(i)||/||b|| 4.319070053474e-13
  5 KSP preconditioned resid norm 6.828200916501e-14 true resid norm 2.499229854610e-10 ||r(i)||/||b|| 4.345264170970e-15
Linear temperature_ solve converged due to CONVERGED_RTOL iterations 5



Convergence in a single iteration is achieved only when each equation is
scaled individually. In the run above, all equations are scaled by the same
non-dimensional parameter. Do you think this is reasonable, or would you
expect the diagonal system to converge in a single iteration irrespective
of the range of the diagonal entries?
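For reference, here is a small standalone sketch (not our IBAMR code) of the
diagonal case in plain PETSc, assuming a recent PETSc with the PetscCall()
API; the matrix size n, the entry distribution, and the right-hand side are
arbitrary placeholders. Because Jacobi inverts a diagonal matrix exactly,
this solve should converge in one iteration regardless of the dynamic range
of the entries.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PC       pc;
  PetscInt i, n = 100, its;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* Diagonal matrix whose entries span roughly [1, 1e4], mimicking the C coefficient range */
  PetscCall(MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 1, NULL, &A));
  for (i = 0; i < n; i++) {
    PetscScalar d = 1.0 + 1.0e4 * (PetscReal)i / (PetscReal)(n - 1);
    PetscCall(MatSetValue(A, i, i, d, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  PetscCall(KSPCreate(PETSC_COMM_SELF, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetType(ksp, KSPGMRES));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCJACOBI));  /* exact inverse of a diagonal matrix */
  PetscCall(KSPSetTolerances(ksp, 1e-12, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPGetIterationNumber(ksp, &its));
  PetscCall(PetscPrintf(PETSC_COMM_SELF, "iterations = %" PetscInt_FMT "\n", its));

  PetscCall(MatDestroy(&A));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(KSPDestroy(&ksp));
  PetscCall(PetscFinalize());
  return 0;
}

With an exact diagonal preconditioner the preconditioned operator is the
identity, so GMRES finishes in one iteration; a single global scale factor
does not change the conditioning, which is consistent with the 5 iterations
seen above.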

@Barry:

>
>
>    What is the result of -ksp_view on the solve?
>

KSP Object: (temperature_) 1 MPI processes
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with one step of iterative refinement when needed
    happy breakdown tolerance 1e-30
  maximum iterations=1000, nonzero initial guess
  tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: (temperature_) 1 MPI processes
  type: shell
    IEPSemiImplicitHierarchyIntegrator::helmholtz_precond::Temperature
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: shell
    rows=1, cols=1


>
>    The way you describe your implementation, it does not sound like
> standard PETSc practice.
>

Yes, we do it differently in IBAMR. Briefly, the main solver is matrix-free,
whereas the preconditioner is an FAC multigrid solver whose bottom solver is
a matrix-based PETSc KSP formed on the coarsest level of the AMR grid.

In the -ksp_view output above, temperature_ is the matrix-free KSP solver
and IEPSemiImplicitHierarchyIntegrator::helmholtz_precond is the FAC
preconditioner.
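For anyone reading the archive later, a minimal, hypothetical sketch of the
generic PETSc pattern this corresponds to (a MATSHELL operator preconditioned
by a PCSHELL) is below, assuming a recent PETSc with the PetscCall() API. It
is not IBAMR code: the two apply callbacks are placeholders that just copy
their input, where real code would apply the Helmholtz operator and one FAC
cycle, respectively.

#include <petscksp.h>

/* Placeholder matrix-free action: real code would apply C*u - div(D grad u) */
static PetscErrorCode ApplyOperator(Mat A, Vec x, Vec y)
{
  PetscFunctionBeginUser;
  PetscCall(VecCopy(x, y));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* Placeholder preconditioner action: real code would run one FAC multigrid cycle */
static PetscErrorCode ApplyPreconditioner(PC pc, Vec r, Vec z)
{
  PetscFunctionBeginUser;
  PetscCall(VecCopy(r, z));
  PetscFunctionReturn(PETSC_SUCCESS);
}

int main(int argc, char **argv)
{
  Mat      mA;
  Vec      x, b;
  KSP      ksp;
  PC       pc;
  PetscInt n = 100;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* Matrix-free operator: only its action on a vector is defined */
  PetscCall(MatCreateShell(PETSC_COMM_WORLD, n, n, PETSC_DETERMINE, PETSC_DETERMINE, NULL, &mA));
  PetscCall(MatShellSetOperation(mA, MATOP_MULT, (void (*)(void))ApplyOperator));

  PetscCall(MatCreateVecs(mA, &x, &b));
  PetscCall(VecSet(b, 1.0));

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, mA, mA));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCSHELL));                 /* user-defined preconditioner */
  PetscCall(PCShellSetApply(pc, ApplyPreconditioner));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(MatDestroy(&mA));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(KSPDestroy(&ksp));
  PetscCall(PetscFinalize());
  return 0;
}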

>
> With PETSc, using a matrix-free operator mA and a matrix A from which KSP
> will build the preconditioner, one uses KSPSetOperators(ksp,mA,A); and
> then just selects the preconditioner with -pc_type xxx. For example, to use
> Jacobi preconditioning one uses -pc_type jacobi (note that this only uses
> the diagonal of A; the rest of A is never used).
>

We run -pc_type jacobi on the bottom solver of the FAC preconditioner.
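As an aside, the usual plain-PETSc way to expose such an inner solver to the
command line is an options prefix; a hypothetical fragment (bottom_ksp and
the "bottom_" prefix are illustrative names, not IBAMR's actual ones):

  /* give the coarse-level solver its own option namespace */
  PetscCall(KSPSetOptionsPrefix(bottom_ksp, "bottom_"));
  PetscCall(KSPSetFromOptions(bottom_ksp));

after which -bottom_pc_type jacobi selects Jacobi on that solver only.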

>
> If you wish to precondition mA by fully solving with the matrix A one can
> use -ksp_monitor_true_residual -pc_type ksp -ksp_ksp_type yyy -ksp_pc_type
> xxx  -ksp_ksp_monitor_true_residual with, for example, yyy of richardson
> and xxx of jacobi
>

Yes, this is what we do.
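For concreteness, with Barry's yyy = richardson and xxx = jacobi, that
option set becomes:

  -ksp_monitor_true_residual -pc_type ksp -ksp_ksp_type richardson -ksp_pc_type jacobi -ksp_ksp_monitor_true_residual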

>
>   Barry
>
>
>
>
> To verify that I am indeed solving a diagonal system I printed the PETSc
> matrix from the preconditioner and viewed it in Matlab. It indeed shows it
> to be a diagonal system. Attached is the plot of the spy command on the
> printed matrix. The matrix in binary form is also attached.
>
> My understanding is that because the C coefficient varies over 4 orders
> of magnitude, i.e., Max(C)/Min(C) ~ 10^4, the matrix is poorly scaled. When
> I rescale my matrix by 1/C, the system converges in 1 iteration as
> expected. Is my understanding correct, and should the scaling by 1/C be
> done even for a diagonal system?
>
> When D is non-zero, scaling by 1/C seems very inconvenient, as D is
> stored as side-centered data for the matrix-free solver.
>
> If I do not scale my equations by 1/C, is there some solver setting
> that improves the convergence rate? (With D non-zero, I have also tried
> GMRES as the KSP solver in the matrix-based preconditioner to get better
> performance, but it didn't matter much.)
>
>
> Thanks,
> Ramakrishnan Thirumalaisamy
> San Diego State University.
> <Temperature_fill.pdf><matrix_temperature>
>
>
>

-- 
--Amneet