Matrix-free modification

Johann Schlamp schlamp at in.tum.de
Wed Oct 29 17:01:18 CDT 2008


Johann Schlamp wrote:
> Matthew Knepley wrote:
>   
>> On Tue, Oct 28, 2008 at 4:35 PM, Johann Schlamp <schlamp at in.tum.de> wrote:
>>> Barry Smith wrote:
>>>>    If you have the "matrix-based" version that you can run on the
>>>> same problem, then look at the residuals computed in the 0th and 1st
>>>> steps of the Krylov method to see how they are different in the two
>>>> runs.
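Both runs can print the per-iteration residual norms with the standard
PETSc monitoring options, e.g. (the executable name is a placeholder):

    ./myapp -snes_monitor -ksp_monitor

-ksp_monitor prints the residual norm at every KSP iteration, so the
0th and 1st values of the two runs can be compared directly.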
>>> The residuals in the 0th and 1st step of the linear solver are
>>> 0.00363413
>>> 0.00189276
>>> for the "matrix-based" version, and
>>> 0.00363413
>>> 1.27858e-17
>> No, this looks wrong. Shouldn't these be identical? It looks like you
>> are wiping out the input vector instead.
>>
>>    Matt

After some extended testing, I think I really messed up the input vector
somehow. I dare say I can take it from here. If not - I know where to
get competent support. :)

Thanks for all the help!


Johann


> Yes, they should be identical. That's exactly the point.
> My naive interpretation was that maybe only the calculation of the
> residual was wrong.
>
> After thinking about it again, I believe I misunderstood Barry's hint
> about copying the 'u'.
> Apparently, the user-provided Jacobian calculation method gets a
> different input than my customized MatMult method, which is called
> within KSP on my matrix-free context. But both are needed for my
> approach, right?
> I hadn't thought of that in advance, so it will take me some time to
> rewrite the code. I will report back tomorrow (it's 11 pm here).
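To make the split concrete, here is a minimal sketch (assuming a shell
context struct that carries the copied iterate; all names are
illustrative, not from the actual code):

    #include <petscsnes.h>

    /* Context attached to the shell matrix; 'u' is filled in by the
       user-provided Jacobian callback at every Newton step. */
    typedef struct {
      Vec u;   /* copy of the current Newton iterate */
    } ShellCtx;

    /* Called by KSP as y = J(u)*x: x is the Krylov vector, while the
       iterate u has to come out of the shell context. */
    PetscErrorCode MyMatMult(Mat A,Vec x,Vec y)
    {
      ShellCtx       *ctx;
      PetscErrorCode ierr;

      ierr = MatShellGetContext(A,(void**)&ctx);CHKERRQ(ierr);
      /* ... iterate over the grid: form each local Jacobian at ctx->u,
         apply it to the matching part of x, accumulate into y ... */
      return 0;
    }

KSP only ever hands the Krylov vector x to MatMult, so the iterate u
has to be stashed somewhere else beforehand.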
>
>
> Thanks for your help!
>
>
> Johann
>
>>> for the matrix-free version. That's definitely smaller than epsilon,
>>> so it converges. By the way, the "matrix-based" version doesn't
>>> converge at all, as I was not using a preconditioner in order to get
>>> comparable results.
>>>
>>> Simply put: the residual is on the order of machine accuracy, so I
>>> would have concluded that the calculation of the residual (y - A*x)
>>> results in zero up to rounding errors. Unfortunately, I don't
>>> completely understand the PETSc code for calculating the residual and
>>> therefore cannot verify it for my new matrix structure.
>>>
>>>>    Perhaps your matrix-free code is corrupting memory? Run with
>>>> -malloc_debug and put a CHKMEMQ; at the end of your matrix-free
>>>> multiply. Or better, run it through valgrind: www.valgrind.org
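As a sketch, with -malloc_debug the end of the multiply could then look
like this (MyMatMult as in the sketch further up):

    PetscErrorCode MyMatMult(Mat A,Vec x,Vec y)
    {
      /* ... the matrix-free product as before ... */
      CHKMEMQ;  /* with -malloc_debug, checks the heap for corruption */
      return 0;
    }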
>>> Interesting thought! I will check this tomorrow.
>>>
>>>
>>> Johann
>>>
>>>
>>>> On Oct 28, 2008, at 3:39 PM, Johann Schlamp wrote:
>>>>
>>>>> Thanks for your reply, Barry!
>>>>>
>>>>> Barry Smith wrote:
>>>>>>   Your general approach seems fine. I would put a breakpoint in
>>>>>> your MatMult routine for a tiny problem and verify that the input
>>>>>> vectors u and x are what you expect (and then at the end of the
>>>>>> function make sure the output y is what you expect).
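PETSc can also drop each process into a debugger directly via the
standard runtime option (the executable name is again a placeholder):

    ./myapp -start_in_debugger
    (gdb) break MyMatMult

MyMatMult here refers to the shell multiply from the sketch further up.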
>>>>> I have already done this, everything is as expected.
>>>>>
>>>>>>   Here is my guess: The matrix-vector product is J(u)*x; when you
>>>>>> do this calculation you need to use the correct u value. This is
>>>>>> the vector that is passed into your "empty method as user-provided
>>>>>> Jacobian method". In that method you should make a copy of this
>>>>>> vector so that you have it for each of the matrix-vector products.
>>>>>> At each Newton step your "empty method as user-provided Jacobian
>>>>>> method" will be called and you will copy the new value of u over.
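A minimal sketch of this suggestion (PETSc 3.0-era callback signature;
ShellCtx as in the sketch further up):

    /* "Empty" user-provided Jacobian method: its only real job is to
       keep a copy of the current Newton iterate u for the shell
       MatMult. ctx->u must start out as PETSC_NULL. */
    PetscErrorCode MyJacobian(SNES snes,Vec u,Mat *A,Mat *B,
                              MatStructure *flag,void *ptr)
    {
      ShellCtx       *ctx = (ShellCtx*)ptr;
      PetscErrorCode ierr;

      if (!ctx->u) { ierr = VecDuplicate(u,&ctx->u);CHKERRQ(ierr); }
      ierr  = VecCopy(u,ctx->u);CHKERRQ(ierr); /* fresh u per Newton step */
      *flag = SAME_NONZERO_PATTERN;
      return 0;
    }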
>>>>> It took me some time to understand what you mean, but after that I
>>>>> got excited trying it out. :)
>>>>> It definitely had some effect on the nonlinear solution, but the
>>>>> linear solver still finishes after one iteration with the far too
>>>>> small residual norm.
>>>>>
>>>>> Anyway, thanks for the anticipated bugfix!
>>>>>
>>>>> Do you have any further suggestions?
>>>>>
>>>>>
>>>>> Best regards,
>>>>> Johann
>>>>>
>>>>>
>>>>>> On Oct 28, 2008, at 10:08 AM, Johann Schlamp <schlamp at in.tum.de> wrote:
>>>>>>
>>>>>>> Hello folks,
>>>>>>>
>>>>>>> I have to implement an advanced matrix-free calculation.
>>>>>>>
>>>>>>> Usually, we use our own analytical method for calculating the Jacobian
>>>>>>> matrix and provide it via SNESSetJacobian(). For complex problems, the
>>>>>>> global matrix takes too much memory. So here's the idea:
>>>>>>>
>>>>>>> First, I set an empty method as the user-provided Jacobian method.
>>>>>>> After the corresponding call, SNES hopefully thinks the Jacobian
>>>>>>> matrix has been calculated correctly. Then it will use it in some
>>>>>>> multiplication like y = A*x.
>>>>>>> For that, I created the matrix A with MatCreateShell() and set up
>>>>>>> my own MatMult method. This new method iterates over my grid,
>>>>>>> calculates local Jacobians, multiplies them with the corresponding
>>>>>>> parts of the vector x, and writes the results into y. After this
>>>>>>> full iteration, it should look as if I had the full global Jacobian
>>>>>>> and multiplied it with the full vector x; the result in y will be
>>>>>>> the same.
>>>>>>> The matrix A and the empty Jacobian method are set through
>>>>>>> SNESSetJacobian().
>>>>>>>
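For reference, a minimal sketch of this setup (same assumptions and
illustrative names as in the sketches further up; the local/global
sizes m, n, M, N are taken as given):

    extern PetscErrorCode MyMatMult(Mat,Vec,Vec);
    extern PetscErrorCode MyJacobian(SNES,Vec,Mat*,Mat*,
                                     MatStructure*,void*);

    PetscErrorCode SetupMatrixFree(SNES snes,PetscInt m,PetscInt n,
                                   PetscInt M,PetscInt N,
                                   ShellCtx *ctx,Mat *A)
    {
      PetscErrorCode ierr;

      ctx->u = PETSC_NULL;  /* filled in by the Jacobian callback */
      ierr = MatCreateShell(PETSC_COMM_WORLD,m,n,M,N,
                            (void*)ctx,A);CHKERRQ(ierr);
      ierr = MatShellSetOperation(*A,MATOP_MULT,
                                  (void(*)(void))MyMatMult);CHKERRQ(ierr);
      ierr = SNESSetJacobian(snes,*A,*A,MyJacobian,(void*)ctx);CHKERRQ(ierr);
      return 0;
    }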
>>>>>>> I have implemented some unit tests to prove the general idea; it
>>>>>>> seems to work.
>>>>>>>
>>>>>>> But if I run a complete simulation, the KSP converges after the
>>>>>>> first iteration with a residual norm of about 1.0e-18, which is
>>>>>>> definitely not right.
>>>>>>>
>>>>>>>
>>>>>>> Now my question: does this procedure make sense at all, and if so,
>>>>>>> is it possible that just the calculation of the residual norm goes
>>>>>>> wrong due to my new matrix structure? I searched the PETSc code,
>>>>>>> but I wasn't able to confirm or rule that out.
>>>>>>>
>>>>>>> Any help would be appreciated.
>>>>>>>
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Johann Schlamp
>>>>>>>
>>>>>>>



