Matrix-free modification

Barry Smith bsmith at mcs.anl.gov
Tue Oct 28 13:52:58 CDT 2008


    Your general approach seems fine. I would put a breakpoint in
your MatMult routine for a tiny problem and verify that the input
vectors u and x are what you expect (and then, at the end of the
function, make sure the output y is what you expect).
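
    For example, the check could look something like the sketch below
(a sketch only, not your code: the ShellCtx struct and the MyMatMult
name are made up here, and the context is assumed to carry a copy of
the current iterate u, as discussed next):

#include <petscsnes.h>

typedef struct {
  Vec u;   /* copy of the current Newton iterate */
} ShellCtx;

/* Shell MatMult: view the inputs on entry and the output on exit so
   they can be checked by hand on a tiny problem. */
PetscErrorCode MyMatMult(Mat A, Vec x, Vec y)
{
  ShellCtx       *ctx;
  PetscErrorCode ierr;

  ierr = MatShellGetContext(A, (void **)&ctx);CHKERRQ(ierr);
  ierr = VecView(ctx->u, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  ierr = VecView(x, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  /* ... apply J(u) to x here, writing the result into y ... */
  ierr = VecView(y, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  return 0;
}
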
    Here is my guess: the matrix-vector product is J(u)*x; when that
product is computed, it needs to use the correct value of u. This is
the vector that is passed into your "empty method as user-provided
Jacobian method". In that method you should make a copy of the vector
so that you have it available for each of the matrix-vector products.
At each Newton step your Jacobian method will be called and you will
copy the new value of u over.
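
    A minimal sketch of that pattern (again with made-up names; the
SNESSetJacobian callback signature shown is the current one, taking
Mat* and MatStructure* arguments):

/* "Empty" Jacobian routine: it assembles nothing, it only saves the
   current u so that later MatMult calls apply J(u). */
PetscErrorCode MyJacobian(SNES snes, Vec u, Mat *A, Mat *B,
                          MatStructure *flag, void *ptr)
{
  ShellCtx       *ctx = (ShellCtx *)ptr;
  PetscErrorCode ierr;

  ierr  = VecCopy(u, ctx->u);CHKERRQ(ierr); /* refresh each Newton step */
  *flag = SAME_NONZERO_PATTERN;
  return 0;
}

Setup, done once before SNESSolve() (m, n are the local and M, N the
global sizes of the shell matrix):

  ShellCtx ctx;
  Mat      A;

  ierr = VecDuplicate(u, &ctx.u);CHKERRQ(ierr);
  ierr = MatCreateShell(PETSC_COMM_WORLD, m, n, M, N, &ctx, &A);CHKERRQ(ierr);
  ierr = MatShellSetOperation(A, MATOP_MULT,
                              (void (*)(void))MyMatMult);CHKERRQ(ierr);
  ierr = SNESSetJacobian(snes, A, A, MyJacobian, &ctx);CHKERRQ(ierr);

Inside MyMatMult, MatShellGetContext() then hands back the saved u
while the local Jacobians are applied to x.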

    Barry


On Oct 28, 2008, at 10:08 AM, Johann Schlamp <schlamp at in.tum.de> wrote:

> Hello folks,
>
> I have to implement some advanced matrix-free calculation.
>
> Usually, we use our own analytical method for calculating the Jacobian
> matrix and provide it via SNESSetJacobian(). For complex problems, the
> global matrix takes too much memory. So here's the idea:
>
> First, I set some empty method as the user-provided Jacobian method.
> After a corresponding call, SNES hopefully thinks the Jacobian matrix
> got calculated right. Then it will use it in some multiplication like
> y = A*x. For that I created the matrix A with MatCreateShell() and
> set up my own MatMult method. This new method iterates over my grid,
> calculates local Jacobians, multiplies them with the corresponding
> part of the vector x and writes the results into y. After this full
> iteration, it should look like I had the full global Jacobian and
> multiplied it with the full vector x. In y, the result will be the
> same.
> The matrix A and the empty Jacobian method are set through
> SNESSetJacobian().
>
> I have implemented some unit tests to generally prove the idea, and
> it seems to work.
>
> But if I run a complete simulation, the KSP converges after the
> first iteration with a residual norm of about 1.0e-18, which is
> definitely not right.
>
>
> Now my question: does this procedure make sense at all, and if so,
> is it possible that just the calculation of the residual norm goes
> wrong due to my new matrix structure? I searched the PETSc code, but
> I wasn't able to find confirmation or a solution for that.
>
> Any help would be appreciated.
>
>
> Best regards,
> Johann Schlamp
>
>



