[petsc-users] Increasing norm with finer mesh
Weizhuo Wang
weizhuo2 at illinois.edu
Mon Oct 8 18:10:57 CDT 2018
The x-axis is n, the number of grid points along one axis; for example, if the
grid is 100*100, then n=100. I believe the preconditioner is the default one
for KSP, since it is set from the options of a default KSP solver. I can send
you the code if you want to take a look at it.
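
(As an aside, one way to double-check which preconditioner is actually in use
is to run with the standard PETSc options

-ksp_view -ksp_converged_reason

which print the KSP and PC types selected from the options database and the
reason the iteration stopped.)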
On Mon, Oct 8, 2018 at 5:58 PM Mark Adams <mfadams at lbl.gov> wrote:
> And what is the x-axis? And what solver (preconditioner) are you using w/o
> LU (2nd graph)?
>
> On Mon, Oct 8, 2018 at 6:47 PM Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Mon, Oct 8, 2018 at 6:13 PM Weizhuo Wang <weizhuo2 at illinois.edu>
>> wrote:
>>
>>> Sorry, I was caught up with midterms for the last few days. I tried the
>>> LU decomposition today and the 2-norm is pretty stable at ~10^-15, which
>>> is expected for double precision. Since the discretization error is so
>>> small, it seems reasonable to assume the rest is mostly algebraic
>>> error.
>>>
>>
>> What are you plotting? It looks like only the algebraic error or
>> residual. There is absolutely no way your discretization error is 1e-14.
>>
>> Thanks,
>>
>> Matt
>>
>>
>>> Then, looking at the result without the -pc_type lu flag (second graph),
>>> the error asymptotes to a constant several orders of magnitude larger than
>>> the tolerances set for the solver (atol=1e-12, rtol=1e-9). Is this the
>>> expected behavior? Shouldn't it decrease with a finer grid?
>>> [image: LU.png]
>>> [image: Total.png]
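>>>
>>> (For reference, a command-line sketch rather than the exact options from my
>>> runs: the tolerances can be set, and the residual the solver actually
>>> reaches can be checked, with the standard options
>>>
>>> -ksp_rtol 1e-9 -ksp_atol 1e-12 -ksp_monitor_true_residual -ksp_converged_reason
>>>
>>> where -ksp_monitor_true_residual prints the unpreconditioned residual norm
>>> at each iteration.)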
>>>
>>> On Tue, Oct 2, 2018 at 6:52 PM Matthew Knepley <knepley at gmail.com>
>>> wrote:
>>>
>>>> On Tue, Oct 2, 2018 at 5:26 PM Weizhuo Wang <weizhuo2 at illinois.edu>
>>>> wrote:
>>>>
>>>>> I didn't specify a tolerance, so it was using the default tolerances.
>>>>> Doesn't the asymptoting norm imply that a finer grid won't help produce a
>>>>> more accurate solution?
>>>>>
>>>>
>>>> There are two things going on in your test: discretization error,
>>>> controlled by the grid, and algebraic error, controlled by the solver. This
>>>> makes it difficult to isolate what is happening. However, it seems clear
>>>> that your plot is looking at algebraic error. You can confirm this by using
>>>>
>>>> -pc_type lu
>>>>
>>>> for the solve. Then all you have is discretization error.
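>>>>
>>>> (A common companion option, offered here only as a suggestion, is
>>>>
>>>> -ksp_type preonly -pc_type lu
>>>>
>>>> so that the LU factorization is used as a single direct solve rather than
>>>> as a preconditioner inside an iterative method.)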
>>>>
>>>> Thanks,
>>>>
>>>> Matt
>>>>
>>>>
>>>>> Mark Adams <mfadams at lbl.gov>:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Oct 2, 2018 at 5:04 PM Weizhuo Wang <weizhuo2 at illinois.edu>
>>>>>> wrote:
>>>>>>
>>>>>>> Yes, I was using the 1-norm in my Helmholtz code, and the example code
>>>>>>> used the 2-norm. But now I am using the 2-norm in both codes.
>>>>>>>
>>>>>>> /*
>>>>>>> Check the error
>>>>>>> */
>>>>>>> ierr = VecAXPY(x,-1.0,u); CHKERRQ(ierr);
>>>>>>> ierr = VecNorm(x,NORM_1,&norm); CHKERRQ(ierr);
>>>>>>> ierr = KSPGetIterationNumber(ksp,&its); CHKERRQ(ierr);
>>>>>>> ierr = PetscPrintf(PETSC_COMM_WORLD,"Norm of error %g iterations %D\n",(double)norm/(m*n),its); CHKERRQ(ierr);
>>>>>>>
>>>>>>> I made a plot to show the increase:
>>>>>>>
>>>>>>
>>>>>>
>>>>>> FYI, this is asymptoting to a constant. What solver tolerance are
>>>>>> you using?
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> [image: Norm comparison.png]
>>>>>>>
>>>>>>> Mark Adams <mfadams at lbl.gov>:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Oct 2, 2018 at 2:24 PM Weizhuo Wang <weizhuo2 at illinois.edu>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> The example code and makefile are attached below. The whole thing
>>>>>>>>> started when I tried to build a Helmholtz solver, and the mean error
>>>>>>>>> (calculated as sum( |numerical_sol - analytical_sol| / analytical_sol ))
>>>>>>>>>
>>>>>>>>
>>>>>>>> This is a 1-norm. If you use max (instead of sum) then you don't
>>>>>>>> need to scale. You do have to be careful about dividing by (near) zero.
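>>>>>>>>
>>>>>>>> A minimal sketch (not from your code; 'num', 'ana', and 'eps' are
>>>>>>>> placeholder names) of a relative max-norm error with a floor on the
>>>>>>>> denominator:
>>>>>>>>
>>>>>>>> const PetscScalar *a, *b;
>>>>>>>> PetscInt          i, nloc;
>>>>>>>> PetscReal         maxrel = 0.0, eps = 1.0e-12;
>>>>>>>> ierr = VecGetLocalSize(num,&nloc);CHKERRQ(ierr);
>>>>>>>> ierr = VecGetArrayRead(num,&a);CHKERRQ(ierr);
>>>>>>>> ierr = VecGetArrayRead(ana,&b);CHKERRQ(ierr);
>>>>>>>> for (i = 0; i < nloc; i++) {
>>>>>>>>   /* relative error at each point, guarding against (near) zero */
>>>>>>>>   PetscReal denom = PetscMax(PetscAbsScalar(b[i]),eps);
>>>>>>>>   maxrel = PetscMax(maxrel,PetscAbsScalar(a[i]-b[i])/denom);
>>>>>>>> }
>>>>>>>> ierr = VecRestoreArrayRead(num,&a);CHKERRQ(ierr);
>>>>>>>> ierr = VecRestoreArrayRead(ana,&b);CHKERRQ(ierr);
>>>>>>>> /* in parallel, follow with a max-reduction of maxrel across ranks */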
>>>>>>>>
>>>>>>>>
>>>>>>>>> increases as I use finer and finer grids.
>>>>>>>>>
>>>>>>>>
>>>>>>>> What was the rate of increase?
>>>>>>>>
>>>>>>>>
>>>>>>>>> Then I looked at example 12 (the Laplacian solver), which is
>>>>>>>>> similar to what I did, to see if I had missed something. The example
>>>>>>>>> uses the 2-norm. I have made some minor modifications (in 3 places) to
>>>>>>>>> the code; you can search for 'Modified' in the code to see them.
>>>>>>>>>
>>>>>>>>> If this helps: I configured PETSc to use real numbers and double
>>>>>>>>> precision, and changed the name of the example code from ex12.c to ex12c.c.
>>>>>>>>>
>>>>>>>>> Thanks for all your replies!
>>>>>>>>>
>>>>>>>>> Weizhuo
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Smith, Barry F. <bsmith at mcs.anl.gov>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> Please send your version of the example that computes the mean
>>>>>>>>>> norm of the grid; I suspect we are talking apples and oranges.
>>>>>>>>>>
>>>>>>>>>> Barry
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> > On Oct 1, 2018, at 7:51 PM, Weizhuo Wang <weizhuo2 at illinois.edu>
>>>>>>>>>> wrote:
>>>>>>>>>> >
>>>>>>>>>> > I also tried to divide the norm by m*n, which is the number of
>>>>>>>>>> grid points, but the norm still increases.
>>>>>>>>>> >
>>>>>>>>>> > Thanks!
>>>>>>>>>> >
>>>>>>>>>> > Weizhuo
>>>>>>>>>> >
>>>>>>>>>> > Matthew Knepley <knepley at gmail.com>
>>>>>>>>>> > On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang <
>>>>>>>>>> weizhuo2 at illinois.edu> wrote:
>>>>>>>>>> > Hi!
>>>>>>>>>> >
>>>>>>>>>> > I have recently been trying out the example code provided with the KSP
>>>>>>>>>> solver (ex12.c). I noticed that the mean norm of the grid increases as I
>>>>>>>>>> use finer meshes. For example, the mean norm is 5.72e-8 at m=10, n=10.
>>>>>>>>>> However, at m=100, n=100, the mean norm increases to 9.55e-6. This seems
>>>>>>>>>> counterintuitive, since most of the time the error should decrease when
>>>>>>>>>> using a finer grid. Am I doing this wrong?
>>>>>>>>>> >
>>>>>>>>>> > The norm is misleading in that it is the l_2 norm, meaning just
>>>>>>>>>> the sqrt of the sum of the squares of
>>>>>>>>>> > the vector entries. It should be scaled by the volume element
>>>>>>>>>> to approximate a scale-independent
>>>>>>>>>> > norm (like the L_2 norm).
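>>>>>>>>>> >
>>>>>>>>>> > As a minimal sketch (assuming a uniform grid on the unit square with
>>>>>>>>>> > spacings hx and hy; those names are not from your code):
>>>>>>>>>> >
>>>>>>>>>> > ierr = VecNorm(x,NORM_2,&norm);CHKERRQ(ierr);
>>>>>>>>>> > norm *= PetscSqrtReal(hx*hy); /* approximates the L_2 norm */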
>>>>>>>>>> >
>>>>>>>>>> > Thanks,
>>>>>>>>>> >
>>>>>>>>>> > Matt
>>>>>>>>>> >
>>>>>>>>>> > Thanks!
>>>>>>>>>> > --
>>>>>>>>>> > Wang Weizhuo
>>>>>>>>>> >
>>>>>>>>>> >
>>>>>>>>>> > --
>>>>>>>>>> > What most experimenters take for granted before they begin
>>>>>>>>>> their experiments is infinitely more interesting than any results to which
>>>>>>>>>> their experiments lead.
>>>>>>>>>> > -- Norbert Wiener
>>>>>>>>>> >
>>>>>>>>>> > https://www.cse.buffalo.edu/~knepley/
>>>>>>>>>> >
>>>>>>>>>> >
>>>>>>>>>> > --
>>>>>>>>>> > Wang Weizhuo
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Wang Weizhuo
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Wang Weizhuo
>>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> Wang Weizhuo
>>>>>
>>>>
>>>>
>>>> --
>>>> What most experimenters take for granted before they begin their
>>>> experiments is infinitely more interesting than any results to which their
>>>> experiments lead.
>>>> -- Norbert Wiener
>>>>
>>>> https://www.cse.buffalo.edu/~knepley/
>>>>
>>>
>>>
>>> --
>>> Wang Weizhuo
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>
--
Wang Weizhuo
-------------- next part --------------
Attachments scrubbed by the list archive:
  Total.png (image/png, 39046 bytes): <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20181008/55bc4882/attachment-0003.png>
  Norm comparison.png (image/png, 87887 bytes): <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20181008/55bc4882/attachment-0004.png>
  LU.png (image/png, 38656 bytes): <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20181008/55bc4882/attachment-0005.png>