[petsc-dev] Using SOR preconditioner for matshell

Matthew Knepley knepley at gmail.com
Thu Nov 11 06:32:13 CST 2021


On Wed, Nov 10, 2021 at 11:16 PM Han Tran <hantran at cs.utah.edu> wrote:

> Hi,
> I am having trouble with the implementation of an SOR preconditioner with a
> matshell. I am using Fortran 90.
>
> (1) First, I implement a function called MatSOR_wrapper, which has the same
> signature as the MatSOR function, i.e.
> MatSOR_wrapper(A, b, omega, flag, shift, its, lits, x, ierr)
> This function returns the PETSc vector x updated by one application of the
> SOR algorithm to the input vector x.
>
> (2) Next, I set matrix operation for my shell matrix, i.e.
> CALL MatShellSetOperation(my_matshell, MATOP_SOR, MatSOR_wrapper, ierr)
>
> To test whether it works in sequential (before implementing my own SOR
> computation of the vector x), I use the PETSc function MatSOR inside my
> MatSOR_wrapper as follows:
>
> SUBROUTINE MatSOR_wrapper(A, b, omega, flag, shift, its, lits, x, ierr)
>    Create, set, and assemble PETSc matrix *aa*
>    Create, set, and assemble PETSc rhs vector *bb*
>    CALL MatSOR(*aa*, *bb*, omega, flag, shift, its, lits, x, ierr)
> END SUBROUTINE MatSOR_wrapper
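For concreteness, here is a minimal sketch of what such a wrapper could look like
with the PETSc Fortran bindings. The declarations follow the usual
finclude/module style; the module my_operator_mod that makes the assembled
matrix aa visible inside the wrapper is hypothetical, not part of the original
code.

      subroutine MatSOR_wrapper(A, b, omega, flag, shift, its, lits, x, ierr)
#include <petsc/finclude/petscmat.h>
      use petscmat
      use my_operator_mod, only: aa   ! hypothetical module holding the assembled Mat aa
      implicit none
      Mat            :: A             ! the shell matrix handed in by PETSc
      Vec            :: b, x          ! rhs and the iterate updated in place
      PetscReal      :: omega, shift
      MatSORType     :: flag
      PetscInt       :: its, lits
      PetscErrorCode :: ierr

      ! aa is assumed to be created and assembled elsewhere; here we simply
      ! delegate to the built-in SOR kernel on the assembled matrix.
      call MatSOR(aa, b, omega, flag, shift, its, lits, x, ierr)
      end subroutine MatSOR_wrapper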
>
> With the above wrapper function, I expect that the MatSOR function computes
> and returns the vector x from the input matrix *aa* and rhs vector *bb* using
> the SOR algorithm.
>
> I run my code to solve a linear system Ax = b with my matshell SOR
> preconditioner. I use GMRES for KSPSolve and set the preconditioner at
> runtime with:
> -pc_type sor -pc_sor_local_forward -ksp_monitor
>
> The solve converges after exactly 2 iterations, with the residual norms
>   0 KSP Residual norm 1.147215489446e+06
>   1 KSP Residual norm 3.798348914902e-11
> Linear solve converged due to CONVERGED_RTOL iterations 1
>
> However, the converged solution is wrong. For comparison, I solved the same
> system without a matshell (i.e. with a standard assembled PETSc matrix),
> again using -pc_type sor -pc_sor_local_forward, with the following results:
>   0 KSP Residual norm 1.147215489446e+06
>   1 KSP Residual norm 7.296837190230e+05
>   2 KSP Residual norm 4.881295368957e+05
>   3 KSP Residual norm 3.176644114439e+05
>   4 KSP Residual norm 1.941135089932e+05
>   5 KSP Residual norm 2.870541410834e-10
> Linear solve converged due to CONVERGED_RTOL iterations 5
>

I would run a problem small enough that you can print all the vectors to
the screen. You know that the two runs diverge at iterate 1, so I would
just print vectors until you find the difference. For example, print b and x
for each iterate of your SOR and the original SOR.
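
A minimal way to do that is a couple of VecView calls inside the wrapper (a
sketch only; the placement around the MatSOR call and the use of aa from the
wrapper sketched above are just one reasonable choice):

      ! print the incoming rhs and the updated iterate on every call
      call VecView(b, PETSC_VIEWER_STDOUT_WORLD, ierr)
      call MatSOR(aa, b, omega, flag, shift, its, lits, x, ierr)
      call VecView(x, PETSC_VIEWER_STDOUT_WORLD, ierr)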

  Thanks,

     Matt


> It can be seen that with my matshell SOR, the solver correctly computes the
> residual norm of the first iteration (i.e. both methods give the same value
> 1.147215489446e+06) but not of the second. I do not understand why the solver
> so quickly produces a very small residual norm and stops the solve because
> the relative decrease of the residual norm is smaller than RTOL. I am quite
> sure that the matrix aa and rhs vector bb provided inside MatSOR_wrapper are
> correct, because I compared them against the assembled matrix used in the
> second (assembled-matrix) run. I also verified that MatSOR_wrapper is called
> twice during the solve (since 2 iterations are performed) and that it returns
> a correct vector x for an input vector x = 0 with the forward SOR algorithm.
>
> My questions are:
> (1) Did I miss providing something else inside my function MatSOR_wrapper,
> or any other options for the solver?
> (2) For the parallel case, the PETSc manual page at
> https://petsc.org/release/docs/manualpages/PC/PCSOR.html states: "Not a true
> parallel SOR, in parallel this implementation corresponds to block Jacobi
> with SOR on each block". If I use the same wrapper as shown above, my matrix
> aa should be the diagonal block (a sequential matrix, the diagonal submatrix
> owned by my process), and my vector bb should be the local part of the global
> rhs vector. Is that correct?
>
> Thank you very much for your help. I am very sorry for my lengthy email. I
> just want to explain the problem as much as I can. I look forward to your
> answers.
>
> Han Tran
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/