Matrix-free modification
Barry Smith
bsmith at mcs.anl.gov
Tue Oct 28 15:59:07 CDT 2008
If you have the "matrix-based" version and can run it on the same
problem, then look at the residuals computed in the 0th and 1st steps
of the Krylov method to see how they differ between the two runs.
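
For example, run both versions with the PETSc monitoring options
(the executable names here are placeholders):

    ./app_matrix_based -snes_monitor -ksp_monitor
    ./app_matrix_free  -snes_monitor -ksp_monitor

-ksp_monitor prints the residual norm at each Krylov iteration, so
the two logs can be compared line by line.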
Perhaps your matrix-free multiply is corrupting memory? Run with
-malloc_debug and put a CHKMEMQ; at the end of your matrix-free
multiply. Or better, run it through valgrind: www.valgrind.org
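
For example (a sketch; MyMatMult stands for whatever routine you
registered for the shell matrix):

    PetscErrorCode MyMatMult(Mat A, Vec x, Vec y)
    {
      /* ... compute y = J(u)*x from the local Jacobians ... */
      CHKMEMQ;  /* validates the heap up to this point; active with -malloc_debug */
      return 0;
    }

and for valgrind:

    valgrind --tool=memcheck ./yourapp <your usual options>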
Barry
On Oct 28, 2008, at 3:39 PM, Johann Schlamp wrote:
> Thanks for your reply, Barry!
>
> Barry Smith wrote:
>> Your general approach seems fine. I would put a breakpoint in your
>> MatMult routine for a tiny problem and verify that the input vectors
>> u and x are what you expect (and then, at the end of the function,
>> make sure the output y is what you expect).
> I have already done this, everything is as expected.
>
>> Here is my guess: the matrix-vector product is J(u)*x; when your
>> calculation is done you need to use the correct u value. This is the
>> vector that is passed into your "empty method as user-provided
>> Jacobian method". In that method you should make a copy of this
>> vector so that you have it for each of the matrix-vector products.
>> At each Newton step your "empty method as user-provided Jacobian
>> method" will be called and you will copy the new value of u over.
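>> Something like this sketch (untested; the MFCtx struct and its field
>> are made up for illustration, and ctx->u is assumed to have been
>> created earlier with VecDuplicate()):
>>
>>    typedef struct {
>>      Vec u;   /* copy of the current Newton iterate */
>>    } MFCtx;
>>
>>    PetscErrorCode EmptyJacobian(SNES snes, Vec u, Mat *A, Mat *B,
>>                                 MatStructure *flag, void *dummy)
>>    {
>>      MFCtx          *ctx;
>>      PetscErrorCode ierr;
>>
>>      ierr  = MatShellGetContext(*A, (void **)&ctx); CHKERRQ(ierr);
>>      ierr  = VecCopy(u, ctx->u); CHKERRQ(ierr);  /* save u for MatMult */
>>      *flag = SAME_NONZERO_PATTERN;
>>      return 0;
>>    }
>>
>> Your MatMult routine then reads ctx->u as the linearization point
>> for J(u)*x.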
> It took me some time to understand what you meant, but after that I
> got excited trying it out. :)
> It definitely had some effect on the nonlinear solution, but the
> linear solver still finishes after one iteration with a residual norm
> that is way too small.
>
> Anyway, thanks for the bugfix!
>
> Do you have any further suggestions?
>
>
> Best regards,
> Johann
>
>
>> On Oct 28, 2008, at 10:08 AM, Johann Schlamp <schlamp at in.tum.de>
>> wrote:
>>
>>> Hello folks,
>>>
>>> I have to implement some advanced matrix-free calculation.
>>>
>>> Usually, we use our own analytical method for calculating the
>>> Jacobian matrix and provide it via SNESSetJacobian(). For complex
>>> problems, the global matrix takes too much memory. So here's the
>>> idea:
>>>
>>> First, I set some empty method as the user-provided Jacobian
>>> method. After a corresponding call, SNES hopefully thinks the
>>> Jacobian matrix got calculated correctly. Then it will use it in
>>> some multiplication like y=A*x. For that I created the matrix A
>>> with MatCreateShell() and set up my own MatMult method. This new
>>> method iterates over my grid, calculates local Jacobians,
>>> multiplies them with the corresponding parts of the vector x and
>>> writes the results into y. After this full iteration, it should
>>> look like I had the full global Jacobian and multiplied it with
>>> the full vector x. In y, the result will be the same.
>>> The matrix A and the empty Jacobian method are set through
>>> SNESSetJacobian(), roughly as sketched below.
>>>
>>> I have implemented some unit tests to generally prove the idea; it
>>> seems to work.
>>>
>>> But if I run a complete simulation, the KSP converges after the
>>> first iteration with a residual norm of about 1.0e-18, which is
>>> definitely not right.
>>>
>>>
>>> Now my question: does this procedure make sense at all, and if so,
>>> is it possible that just the calculation of the residual norm goes
>>> wrong due to my new matrix structure? I searched the PETSc code,
>>> but I wasn't able to find proof of or a solution for that.
>>>
>>> Any help would be appreciated.
>>>
>>>
>>> Best regards,
>>> Johann Schlamp
>>>
>>>
>>
>