[petsc-users] Convergence rate for spatially varying Helmholtz system

Barry Smith bsmith at petsc.dev
Thu Sep 30 09:32:30 CDT 2021



> On Sep 29, 2021, at 5:37 PM, Ramakrishnan Thirumalaisamy <rthirumalaisam1857 at sdsu.edu> wrote:
> 
> Hi all,
> 
> I am trying to solve the Helmholtz equation for temperature T:
> 
> (C I  + Div D grad) T = f
> 
> in IBAMR, in which C is a spatially varying coefficient (the diagonal term) and D is the spatially varying diffusion coefficient. I use a matrix-free solver with a matrix-based PETSc preconditioner. For the matrix-free solver I use GMRES, and for the matrix-based preconditioner I use a Richardson KSP with Jacobi as the preconditioner. As the simulation progresses, the iteration count starts to increase. To understand the cause, I set D to zero, which results in a diagonal system:
>   
> C T = f.
> 
> This should result in convergence within a single iteration, but I get convergence in 3 iterations.
> 
> Residual norms for temperature_ solve.
>   0 KSP preconditioned resid norm 4.590811647875e-02 true resid norm 2.406067589273e+09 ||r(i)||/||b|| 4.455533946945e-05
>   1 KSP preconditioned resid norm 2.347767895880e-06 true resid norm 1.210763896685e+05 ||r(i)||/||b|| 2.242081505717e-09
>   2 KSP preconditioned resid norm 1.245406571896e-10 true resid norm 6.328828824310e+00 ||r(i)||/||b|| 1.171966730978e-13
> Linear temperature_ solve converged due to CONVERGED_RTOL iterations 2
> 

   What is the result of -ksp_view on the solve? 

   The way you describe your implementation, it does not sound like standard PETSc practice.

With PETSc, when using a matrix-free operator mA and a matrix A from which KSP will build the preconditioner, one calls KSPSetOperators(ksp,mA,A); and then simply selects the preconditioner with -pc_type xxx. For example, to use Jacobi preconditioning one uses -pc_type jacobi (note that this uses only the diagonal of A; the rest of A is never used).
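A minimal sketch of that setup follows, assuming a MATSHELL for the matrix-free action; the names UserMultT, nlocal, N, and ctx are placeholders for whatever IBAMR actually provides, not PETSc API:

  #include <petscksp.h>

  /* Hypothetical application routine applying y = (C I + Div D grad) x */
  extern PetscErrorCode UserMultT(Mat mA, Vec x, Vec y);

  PetscErrorCode SolveTemperature(MPI_Comm comm, Mat A, Vec b, Vec T,
                                  PetscInt nlocal, PetscInt N, void *ctx)
  {
    Mat            mA;
    KSP            ksp;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    /* Matrix-free operator: only its action (MATOP_MULT) is defined */
    ierr = MatCreateShell(comm, nlocal, nlocal, N, N, ctx, &mA);CHKERRQ(ierr);
    ierr = MatShellSetOperation(mA, MATOP_MULT, (void (*)(void))UserMultT);CHKERRQ(ierr);

    ierr = KSPCreate(comm, &ksp);CHKERRQ(ierr);
    /* mA defines the operator; A is used only to build the preconditioner */
    ierr = KSPSetOperators(ksp, mA, A);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* e.g. -ksp_type gmres -pc_type jacobi */
    ierr = KSPSolve(ksp, b, T);CHKERRQ(ierr);

    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    ierr = MatDestroy(&mA);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }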

If you wish to precondition mA by fully solving with the matrix A, one can use -ksp_monitor_true_residual -pc_type ksp -ksp_ksp_type yyy -ksp_pc_type xxx -ksp_ksp_monitor_true_residual with, for example, yyy of richardson and xxx of jacobi.
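With those choices substituted, the full option set on the command line would look like the following (the executable name is just a placeholder, and -ksp_type gmres is the outer GMRES you described):

  ./your_app -ksp_type gmres -ksp_monitor_true_residual \
             -pc_type ksp -ksp_ksp_type richardson -ksp_pc_type jacobi \
             -ksp_ksp_monitor_true_residual

Here the outer GMRES applies the matrix-free mA, while the inner Richardson/Jacobi KSP solves with the assembled A inside the PCKSP preconditioner.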

  Barry




> To verify that I am indeed solving a diagonal system, I printed the PETSc matrix from the preconditioner and viewed it in MATLAB; it is indeed diagonal. Attached is a plot of the spy command on the printed matrix. The matrix in binary form is also attached. 
> 
> My understanding is that because the coefficient C varies over 4 orders of magnitude, i.e., max(C)/min(C) ~ 10^4, the matrix is poorly scaled. When I rescale my matrix by 1/C, the system converges in 1 iteration as expected. Is my understanding correct, and should scaling by 1/C be done even for a diagonal system?
> 
> When D is non-zero, scaling by 1/C seems very inconvenient because D is stored as side-centered data for the matrix-free solver. 
> 
> If I do not scale my equations by 1/C, is there some solver setting that improves the convergence rate? (With D non-zero, I have also tried GMRES as the KSP solver in the matrix-based preconditioner to get better performance, but it did not matter much.)
> 
> 
> Thanks,
> Ramakrishnan Thirumalaisamy
> San Diego State University.
> <Temperature_fill.pdf><matrix_temperature>


