[petsc-users] Different precision from MatAssembly/MatView

Matthew Knepley knepley at gmail.com
Wed Mar 25 05:03:07 CDT 2026


On Wed, Mar 25, 2026 at 4:35 AM Noam T. <dontbugthedevs at proton.me> wrote:

> I was not aware of those two flags (I should read the documentation more
> thoroughly); surely they would have been helpful.
>
> However, in the example from this discussion (single Q1 quadrilateral),
> those two flags do not show much information.
>

Ah, then you are not using Plex to compute the residual, say with
DMPlexSNESComputeResidual, so the integrate flag is never active (you do
not use PetscFEIntegrateResidual).


> -petscds_print_integrate N prints nothing, for any N value
>
> -dm_plex_print_fem N prints
>
> N = 1  nothing
> N = 2...5  --> Indices: -1 -2 0 1 2 -4 -4 -5
>

Here, DMPlexMatSetClosure() is telling you what indices it expanded in the
call to MatSetValues(). There were 8 indices, but 5 of them were constrained
in the Section, so they are negative.
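For what it's worth, the skipping can be sketched in a few lines of plain Python (this is not PETSc code; it only mimics the documented MatSetValues() behavior of ignoring negative row/column indices):

```python
def set_values_dense(A, rows, cols, vals):
    """Mimic MatSetValues(..., ADD_VALUES) on a small dense matrix A:
    entries whose row or column index is negative are silently skipped,
    which is how constrained dofs are filtered out during assembly."""
    for i, r in enumerate(rows):
        if r < 0:                 # constrained dof: ignored
            continue
        for j, c in enumerate(cols):
            if c < 0:             # constrained dof: ignored
                continue
            A[r][c] += vals[i][j]

# The 8 closure indices from the example above: 5 are negative
# (constrained), so only the 3 free dofs receive values.
idx = [-1, -2, 0, 1, 2, -4, -4, -5]
free = [k for k in idx if k >= 0]   # [0, 1, 2]
```
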

  Thanks,

     Matt


> which I must say I cannot make much out of.
>
> The mesh I use is read with -dm_plex_filename; also using
> -dm_plex_interpolate 1, DMSetType(...,DMPLEX),  and a few "gmsh" flags.
>
> Thanks
>
>
> On Tuesday, March 17th, 2026 at 4:54 PM, Matthew Knepley <
> knepley at gmail.com> wrote:
>
> On Tue, Mar 17, 2026 at 12:50 PM Noam T. <dontbugthedevs at proton.me> wrote:
>
>> That was the claim indeed. And it held for a while. Eventually figured
>> out the reason for the differences and, not unexpectedly, the error was on
>> my side. There was an additional operation on the small matrix that I
>> consistently missed every single time I tested.
>>
>> Values set with MatSetClosure / MatSetValues / etc are the expected ones,
>> and from MatView they are also correct, just rounded for
>> visualization/printing.
>>
>
> Cool. Do you think there is diagnostic output I could give that would make
> looking into the process easier? If you are using the Plex assembly, I have
>
> -dm_plex_print_fem 5
>
> to print out all the element vec/mats and local assembly. There is also
>
> -petscds_print_integrate 5
>
> for lower level integration output. Let me know and I can add something.
>
> Thanks,
>
> Matt
>
>> Thanks.
>>
>> On Sunday, March 15th, 2026 at 11:04 PM, Matthew Knepley <
>> knepley at gmail.com> wrote:
>>
>> On Sun, Mar 15, 2026 at 5:24 PM Noam T. <dontbugthedevs at proton.me> wrote:
>>
>>> This is very easy to check. Multiply the solution by your matrix and
>>> subtract the RHS. It must satisfy the tolerance you have for the solver. If
>>> not, there is some other problem, but I imagine it does, which is what we
>>> mean by the solution.
>>>
>>>
>>> This is indeed the case.
>>>
>>>
>>> To answer the first question, there is no reduction in precision in
>>> MatSetValues(). I am sure if you carry out the addition by hand you will
>>> see that the result is within 1ulp as it should be from the standard.
>>>
>>>
>>> I imagined this was simply a printing artifact. However, something is
>>> bugging me: the message from the finite-difference Jacobian check.
>>>
>>> The printed Jacobian from finite differences is exactly the computed one
>>> (see below). If the differences from MatView were simply from printing, the
>>> error between the finite-difference and "in-code" Jacobians should be (much
>>> closer to) zero, not 1e-5.
>>>
>>> So I did a couple more checks, avoiding calls to "XView"-type
>>> functions and reading the values directly:
>>>
>>> (For comparison) Computed matrix in the jacobian function (used to
>>> assemble)
>>> ----
>>> 0.531360000000 0.066670000000 0.197333333327
>>> 0.066670000000 0.535360000000 -0.003333333327
>>> 0.197333333327 -0.003333333327 0.527413333333
>>> ----
>>>
>>> ** Calling "MatGetValues" and then printing them all:
>>>
>>> 5.313866666667E-01 6.667333333333E-02 1.973381999940E-01
>>> 6.667333333333E-02 5.353866666667E-01 -3.338499994017E-03
>>> 1.973381999940E-01 -3.338499994017E-03 5.274399026665E-01
>>>
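As a cross-check, the entrywise gap between these two matrices can be computed directly; a plain-Python sketch with the values copied from above:

```python
# Matrix printed inside the Jacobian function (the "computed" one).
computed = [
    [0.531360000000, 0.066670000000, 0.197333333327],
    [0.066670000000, 0.535360000000, -0.003333333327],
    [0.197333333327, -0.003333333327, 0.527413333333],
]
# Matrix read back with MatGetValues (the "stored" one).
stored = [
    [5.313866666667e-01, 6.667333333333e-02, 1.973381999940e-01],
    [6.667333333333e-02, 5.353866666667e-01, -3.338499994017e-03],
    [1.973381999940e-01, -3.338499994017e-03, 5.274399026665e-01],
]
# Largest entrywise difference between the two.
max_diff = max(abs(a - b)
               for ra, rb in zip(computed, stored)
               for a, b in zip(ra, rb))
# max_diff comes out around 2.7e-5, the same size as the
# ||J - Jfd|| diagnostic later in the thread.
```
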
>>
>> So the claim is that you have some small matrix (the first) and you call
>> MatSetValues() to put it in a big matrix, and you get different values in
>> the big matrix? I am confident that this is impossible. So, in order to
>> verify this, it should be easy to
>>
>> 1) Run in the debugger
>>
>> 2) Stop in MatSetValues()
>>
>> 3) Print out the input array v[] (you should get the first matrix in full
>> precision)
>>
>> 4) You could trace down to where each value is entered, but it might be
>> easier to
>>
>> 5) Call https://petsc.org/main/manualpages/Mat/MatSeqAIJGetArray/ and
>> look directly at the array right after the call
>> to MatSetValues()
>>
>> Thanks,
>>
>> Matt
>>
>>> These are the same values shown as when calling MatView.
>>> They are all quite off. There is, for example, a difference of about
>>> 2.7e-5 for entry (0,0).
>>> Just in case I messed up the precision, the closest single-precision
>>> value for entry (0,0) is 0x1.100e6p-1 = 0.53135997...
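That single-precision check can be reproduced with a short sketch (plain Python, rounding the double through a 32-bit float with a struct round-trip):

```python
import struct

def to_single(x):
    """Round a Python float (an IEEE-754 double) to the nearest
    IEEE-754 single by packing/unpacking it as a 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

s = to_single(0.53136)
# s is about 0.53135997..., i.e. the nearest single-precision value
# sits just below 0.53136, as noted above -- so the stored 0.5313867
# cannot be explained as a single-precision rounding of 0.53136.
```
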
>>>
>>> ** 2-norm of the assembled matrix times a vector [1,1,1]
>>> 1.2294717692646715
>>>
>>> This is _exactly_ the norm using the matrix from MatView (i.e. the one
>>> from MatGetValues) times [1,1,1].
>>>
>>> The original/assembled matrix times [1,1,1] should be
>>> 1.229421704786441 -- difference of 5e-5
>>>
>>> Am I accessing stored/assembled values the wrong way so that they all
>>> are always off when printed? Or am I missing something else?
>>>
>>> Thanks
>>>
>>>
>>> On Sunday, March 15th, 2026 at 6:16 PM, Matthew Knepley <
>>> knepley at gmail.com> wrote:
>>>
>>> On Sun, Mar 15, 2026 at 12:58 PM Noam T. <dontbugthedevs at proton.me>
>>> wrote:
>>>
>>>> The results above of the SNES output were using the flags
>>>>
>>>> -ksp_type fgmres -pc_type lu
>>>>
>>>
>>> This is very easy to check. Multiply the solution by your matrix and
>>> subtract the RHS. It must satisfy the tolerance you have for the solver. If
>>> not, there is some other problem, but I imagine it does, which is what we
>>> mean by the solution.
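The check suggested here can be sketched for a small dense system (plain Python; the matrix and vectors below are placeholders, not values from this thread):

```python
import math

def residual_norm(A, x, b):
    """Return ||A x - b||_2 for a small dense system A x = b."""
    r = [sum(aij * xj for aij, xj in zip(row, x)) - bi
         for row, bi in zip(A, b)]
    return math.sqrt(sum(ri * ri for ri in r))

# For a converged solve, this should be at or below the solver tolerance.
```
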
>>>
>>> To answer the first question, there is no reduction in precision in
>>> MatSetValues(). I am sure if you carry out the addition by hand you will
>>> see that the result is within 1ulp as it should be from the standard.
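That claim can be verified directly: IEEE-754 addition is correctly rounded, so the stored sum is within half an ulp of the exact sum. A plain-Python sketch with arbitrary doubles:

```python
import math
from fractions import Fraction

a, b = 0.1, 0.2          # arbitrary doubles
s = a + b                # correctly rounded IEEE-754 addition
exact = Fraction(a) + Fraction(b)   # exact rational sum
err = abs(Fraction(s) - exact)
# Round-to-nearest guarantees the error is at most ulp(s) / 2.
assert err <= Fraction(math.ulp(s)) / 2
```
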
>>>
>>> Thanks,
>>>
>>> Matt
>>>
>>>> Thanks
>>>> On Sunday, March 15th, 2026 at 5:12 PM, Matthew Knepley <
>>>> knepley at gmail.com> wrote:
>>>>
>>>> On Sun, Mar 15, 2026 at 11:19 AM Noam T. via petsc-users <
>>>> petsc-users at mcs.anl.gov> wrote:
>>>>
>>>>> A bit more information:
>>>>>
>>>>> Looking at the solution of the system of equations, knowing the exact
>>>>> RHS vector:
>>>>> ---
>>>>> [-0.00197, 0.00203, -0.00197]
>>>>> ---
>>>>>
>>>>> SNES gives the solution:
>>>>> ---
>>>>> 3.316262462871E-03, -4.189244774965E-03, 2.468317119413E-03
>>>>> ---
>>>>>
>>>>> which is indeed (closer to) the solution for the jacobian shown in
>>>>> MatView (the "lower precision" one). The "exact" solution would be:
>>>>> ---
>>>>> 3.282607248309093e-03, -4.241572005990172e-03, 2.425030835360701e-03
>>>>> ---
>>>>>
>>>>> which already differs from the second decimal digit onward.
>>>>>
>>>>
>>>> Are you using an exact solution method, like LU?
>>>>
>>>> Thanks,
>>>>
>>>> Matt
>>>>
>>>>>
>>>>> ***
>>>>> Requesting a low precision to have SNES do a few iterations shows
>>>>> that the built Jacobian seems to deviate more and more from the computed
>>>>> one after some iterations, e.g.:
>>>>>
>>>>> Computed (printed from in-code):
>>>>> ---
>>>>> 0.536488494840 0.066474939660 0.198813816284
>>>>> 0.066474939660 0.529482312614 -0.002835445071
>>>>> 0.198813816284 -0.002835445071 0.530589055263
>>>>> ---
>>>>>
>>>>> MatView:
>>>>> ---
>>>>> row 0: (0, 0.538491) (1, 0.0662019) (2, 0.198808)
>>>>> row 1: (0, 0.0662019) (1, 0.526981) (2, -0.00257554)
>>>>> row 2: (0, 0.198808) (1, -0.00257554) (2, 0.529883)
>>>>> ---
>>>>>
>>>>> Only one digit is correct/the same in entries [1,2], [2,1] and [2,2].
>>>>>
>>>>> Thank you.
>>>>> Noam
>>>>>
>>>>>
>>>>> On Sunday, March 15th, 2026 at 3:29 PM, Noam T. <
>>>>> dontbugthedevs at proton.me> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> Looking at the assembly of the Jacobian using MatView, I noticed
>>>>> values were somewhat different from the computed ones in the jacobian
>>>>> function (DMSNESSetJacobian). For the test, I used a single Q1 element,
>>>>> with 4 quadrature points, so the exact Jacobian matrix can be computed
>>>>> analytically.
>>>>>
>>>>> In the jacobian function, after looping through all quadrature points,
>>>>> printing the matrix to stdout shows:
>>>>>
>>>>> ----
>>>>> 0.531360000000 0.066670000000 0.197333333327
>>>>> 0.066670000000 0.535360000000 -0.003333333327
>>>>> 0.197333333327 -0.003333333327 0.527413333333
>>>>> ----
>>>>>
>>>>> (showing only the 3 free DoFs). The 2x2 submatrix in [0:1] has exactly
>>>>> four/five nonzero digits, as shown above. The rest of the elements have a
>>>>> periodic 3 ending (a couple of digits at the end are off, but that's fine).
>>>>>
>>>>> This same matrix is the one added to the global jacobian during the
>>>>> quadrature loop, with "DMPlexMatSetClosure". After looping, calling
>>>>> "MatAssemblyBegin/End", then MatView:
>>>>>
>>>>> ----
>>>>> Mat Object: 1 MPI process
>>>>> type: seqaij
>>>>> row 0: (0, 0.531387) (1, 0.0666733) (2, 0.197338)
>>>>> row 1: (0, 0.0666733) (1, 0.535387) (2, -0.0033385)
>>>>> row 2: (0, 0.197338) (1, -0.0033385) (2, 0.52744)
>>>>> ----
>>>>>
>>>>> Values are close, but definitely not the same as computed. Are these
>>>>> values from "MatView" the ones actually stored in the matrix? It seems
>>>>> there is some precision loss in the process.
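For what it's worth, the displayed values are consistent with 6-significant-digit %g-style formatting of the stored doubles (the assumption being that the default ASCII viewer prints with %g); a quick plain-Python check:

```python
# %g keeps 6 significant digits and strips trailing zeros, which
# reproduces the MatView output from the stored (full-precision) values.
assert "%g" % 0.5313866666667 == "0.531387"
assert "%g" % 0.1973381999940 == "0.197338"
assert "%g" % -0.003338499994017 == "-0.0033385"
```
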
>>>>>
>>>>> Interestingly, computing the Jacobian through finite differences
>>>>> shows the same result for the "hand-coded" Jacobian, whereas the
>>>>> finite-difference one is the "exact" one (same as the first one above):
>>>>>
>>>>> ----
>>>>> ||J - Jfd||_F/||J||_F = 4.90736e-05, ||J - Jfd||_F = 4.74265e-05
>>>>> Hand-coded Jacobian ----------
>>>>> Mat Object: 1 MPI process
>>>>> type: seqaij
>>>>> row 0: (0, 0.531387) (1, 0.0666733) (2, 0.197338)
>>>>> row 1: (0, 0.0666733) (1, 0.535387) (2, -0.0033385)
>>>>> row 2: (0, 0.197338) (1, -0.0033385) (2, 0.52744)
>>>>> Finite difference Jacobian ----------
>>>>> Mat Object: 1 MPI process
>>>>> type: seqaij
>>>>> row 0: (0, 0.53136) (1, 0.06667) (2, 0.197333)
>>>>> row 1: (0, 0.06667) (1, 0.53536) (2, -0.00333333)
>>>>> row 2: (0, 0.197333) (1, -0.00333333) (2, 0.527413)
>>>>> Hand-coded minus finite-difference Jacobian with tolerance 1e-05
>>>>> ----------
>>>>> Mat Object: 1 MPI process
>>>>> type: seqaij
>>>>> row 0: (0, 2.66554e-05)
>>>>> row 1: (1, 2.66554e-05)
>>>>> row 2: (2, 2.6558e-05)
>>>>> ----
>>>>>
>>>>> The main code is in Fortran, using double precision variables, in case
>>>>> it is relevant.
>>>>>
>>>>> Thank you.
>>>>>
>>>>> Noam
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/