[petsc-users] GMRES plus BlockJacobi behave differently for seemingly identical matrices

Pierre Jolivet pierre at joliv.et
Wed Nov 26 23:26:42 CST 2025



> On 26 Nov 2025, at 10:26 PM, Randall Mackie <rlmackie862 at gmail.com> wrote:
> 
> Hi Pierre,
> 
> Following up on this email, what are good options to use when trying PCHPDDM for the first time?

There is no single answer to this question, especially when solving systems in a purely algebraic manner.
Some examples can be found in the repository:
$ git grep " hpddm" "*/ex*"

> For example, what did you use here that worked so well?

What worked well here may totally fail for other problems, so unless you are solving the same problem, the following may not work. But here is what I get with -options_view.

Thanks,
Pierre

  Linear A_ solve converged due to CONVERGED_RTOL iterations 5
  Linear B_ solve converged due to CONVERGED_RTOL iterations 39
#PETSc Option Table entries:
-A_ksp_converged_reason # (source: command line)
-A_ksp_type fgmres # (source: command line)
-A_pc_hpddm_coarse_mat_type baij # (source: command line)
-A_pc_hpddm_harmonic_overlap 2 # (source: command line)
-A_pc_hpddm_levels_1_sub_pc_type lu # (source: command line)
-A_pc_hpddm_levels_1_svd_nsv 120 # (source: command line)
-A_pc_type hpddm # (source: command line)
-B_ksp_converged_reason # (source: command line)
-B_ksp_type fgmres # (source: command line)
-B_pc_hpddm_coarse_mat_type baij # (source: command line)
-B_pc_hpddm_harmonic_overlap 2 # (source: command line)
-B_pc_hpddm_levels_1_sub_pc_type lu # (source: command line)
-B_pc_hpddm_levels_1_svd_nsv 120 # (source: command line)
-B_pc_type hpddm # (source: command line)
-matload_block_size 1 # (source: file)
-options_view # (source: command line)
-vecload_block_size 1 # (source: file)
#End of PETSc Option Table entries
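
A minimal driver sketch that would pick up the A_-prefixed options above could look like the following (the binary file names A.dat and rhs.dat are placeholders, not the actual attachment from the earlier thread):

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat         A;
  Vec         b, x;
  KSP         ksp;
  PetscViewer viewer;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* load the matrix from a PETSc binary file (placeholder name) */
  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "A.dat", FILE_MODE_READ, &viewer));
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatLoad(A, viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  /* load the right-hand side (placeholder name) */
  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "rhs.dat", FILE_MODE_READ, &viewer));
  PetscCall(VecCreate(PETSC_COMM_WORLD, &b));
  PetscCall(VecLoad(b, viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscCall(VecDuplicate(b, &x));

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  /* the "A_" prefix makes this KSP pick up all the -A_* options listed above */
  PetscCall(KSPSetOptionsPrefix(ksp, "A_"));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}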

> Thanks,
> 
> Randy M.
> 
> 
> 
>> On Oct 24, 2025, at 8:14 AM, Pierre Jolivet <pierre at joliv.et> wrote:
>> 
>> 
>>> On 24 Oct 2025, at 4:38 PM, Nils Schween <nils.schween at mpi-hd.mpg.de> wrote:
>>> 
>>> Thank you very much Pierre!
>>> 
>>> I was not aware of the fact that the fill-in in the ILU decides about
>>> its quality. But its clear now. I will just test what level of fill we
>>> need for our application.
>> 
>> I’ll note that block Jacobi and ILU are not very efficient solvers in most instances.
>> I tried much fancier algebraic preconditioners such as BoomerAMG and GAMG on your problem, and they fail hard out of the box.
>> Without knowing much more about the problem, it is difficult to set them up.
>> We also have other more robust preconditioners in PETSc by means of domain decomposition methods.
>> deal.II is interfaced with PCBDDC (which is also somewhat difficult to tune in a fully algebraic mode) and you could also use PCHPDDM (in fully algebraic mode).
>> On this toy problem, PCHPDDM performs much better in terms of iteration count than the simpler PCBJACOBI + (sub) PCILU.
>> Of course, as we always advise our users, it's best to do a little bit of a literature survey to find the best method for your application; I doubt it's PCBJACOBI.
>> If the solver part is not a problem in your application, just carry on with what’s easiest for you.
>> If you want some precise help on either PCBDDC or PCHPDDM, feel free to get in touch with me in private.
>> 
>> Thanks,
>> Pierre
>> 
>> PCGAMG
>> Linear A_ solve did not converge due to DIVERGED_ITS iterations 1000
>> Linear B_ solve did not converge due to DIVERGED_ITS iterations 1000
>> PCHYPRE
>> Linear A_ solve did not converge due to DIVERGED_NANORINF iterations 0
>> Linear B_ solve did not converge due to DIVERGED_NANORINF iterations 0
>> PCHPDDM
>> Linear A_ solve converged due to CONVERGED_RTOL iterations 4
>> Linear B_ solve converged due to CONVERGED_RTOL iterations 38
>> PCBJACOBI
>> Linear A_ solve converged due to CONVERGED_RTOL iterations 134
>> Linear B_ solve did not converge due to DIVERGED_ITS iterations 1000
>> 
>>> Once more thanks,
>>> Nils
>>> 
>>> 
>>> Pierre Jolivet <pierre at joliv.et> writes:
>>> 
>>>>> On 24 Oct 2025, at 1:52 PM, Nils Schween <nils.schween at mpi-hd.mpg.de> wrote:
>>>>> 
>>>>> Dear PETSc users, Dear PETSc developers,
>>>>> 
>>>>> In our software we are solving a linear system with PETSc using GMRES
>>>>> in conjunction with a BlockJacobi preconditioner, i.e. the default of
>>>>> the KSP object.
>>>>> 
>>>>> We have two versions of the system matrix, say A and B. The difference
>>>>> between them is the non-zero pattern. The non-zero pattern of matrix B
>>>>> is a subset of the one of matrix A. Their values should be identical.
>>>>> 
>>>>> We solve the linear system: using A yields a solution after some
>>>>> iterations, whereas using B does not converge.
>>>>> 
>>>>> I created binary files of the two matrices and the right-hand side, and
>>>>> wrote a small PETSc program that loads them and demonstrates the
>>>>> issue. I attach the files to this email.
>>>>> 
>>>>> We would like to understand why the solver-preconditioner combination
>>>>> works in case A and not in case B. Can you help us find this out?
>>>>> 
>>>>> To test whether the two matrices are identical, I subtracted them and
>>>>> computed the Frobenius norm of the result. It is zero.
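>>>>> A minimal sketch of that check, assuming Mat handles A and B are already loaded:
>>>>> 
>>>>>   Mat       D;
>>>>>   PetscReal nrm;
>>>>>   PetscCall(MatDuplicate(A, MAT_COPY_VALUES, &D));              /* D = A */
>>>>>   /* DIFFERENT_NONZERO_PATTERN is the safe choice since the patterns differ */
>>>>>   PetscCall(MatAXPY(D, -1.0, B, DIFFERENT_NONZERO_PATTERN));    /* D = A - B */
>>>>>   PetscCall(MatNorm(D, NORM_FROBENIUS, &nrm));
>>>>>   PetscCall(PetscPrintf(PETSC_COMM_WORLD, "||A - B||_F = %g\n", (double)nrm));
>>>>>   PetscCall(MatDestroy(&D));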
>>>> 
>>>> The default subdomain solver is ILU(0).
>>>> By definition, this won’t allow fill-in.
>>>> So when you are not storing the zeros in B, the quality of your PC is much worse.
>>>> You can check this yourself with -A_ksp_view -B_ksp_view:
>>>> […]
>>>>  0 levels of fill
>>>>  tolerance for zero pivot 2.22045e-14
>>>>  matrix ordering: natural
>>>>  factor fill ratio given 1., needed 1.
>>>>    Factored matrix follows:
>>>>      Mat Object: (A_) 1 MPI process
>>>>        type: seqaij
>>>>        rows=1664, cols=1664
>>>>        package used to perform factorization: petsc
>>>>        total: nonzeros=117760, allocated nonzeros=117760
>>>>          using I-node routines: found 416 nodes, limit used is 5
>>>> […]
>>>>  0 levels of fill
>>>>  tolerance for zero pivot 2.22045e-14
>>>>  matrix ordering: natural
>>>>  factor fill ratio given 1., needed 1.
>>>>    Factored matrix follows:
>>>>      Mat Object: (B_) 1 MPI process
>>>>        type: seqaij
>>>>        rows=1664, cols=1664
>>>>        package used to perform factorization: petsc
>>>>        total: nonzeros=49408, allocated nonzeros=49408
>>>>          not using I-node routines
>>>> 
>>>> Check the number of nonzeros of both factored Mat.
>>>> With -B_pc_factor_levels 3, you’ll get roughly similar convergence speed (and density in the factored Mat of both PC).
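>>>> The same thing can be set in code rather than on the command line; a rough sketch, assuming a parallel run with the default PCBJACOBI + sub PCILU and an already configured KSP named ksp (in a serial run, where the PC is plain PCILU, you can call PCFactorSetLevels directly on the KSP's PC):
>>>> 
>>>>   KSP      *subksp;
>>>>   PC        pc, subpc;
>>>>   PetscInt  nlocal, first;
>>>>   PetscCall(KSPSetUp(ksp));                         /* the sub KSPs exist only after setup */
>>>>   PetscCall(KSPGetPC(ksp, &pc));
>>>>   PetscCall(PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp));
>>>>   PetscCall(KSPGetPC(subksp[0], &subpc));           /* one local block per process by default */
>>>>   PetscCall(PCFactorSetLevels(subpc, 3));           /* ILU(3) instead of the default ILU(0) */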
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>>> 
>>>>> To give you more context, we solve a system of partial differential
>>>>> equations that models astrophysical plasmas. It is essentially a system
>>>>> of advection-reaction equations. We use a discontinuous Galerkin (dG)
>>>>> method. Our code relies on the finite element library deal.II
>>>>> and its PETSc interface. The system matrices A and B are the result of
>>>>> the dG discretisation. We use GMRES with a BlockJacobi preconditioner,
>>>>> because we do not know any better.
>>>>> 
>>>>> I tested the code I sent with PETSc 3.24.0 and 3.19.1 on my workstation, i.e.
>>>>> Linux home-desktop 6.17.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Sun, 12 Oct 2025 12:45:18 +0000 x86_64 GNU/Linux
>>>>> I use OpenMPI 5.0.8 and I compiled with mpicc, which in my case uses
>>>>> gcc.
>>>>> 
>>>>> In case you need more information, please let me know.
>>>>> Any help is appreciated.
>>>>> 
>>>>> Thank you,
>>>>> Nils 
>>>>> <example.tar.gz>
>>>>> 
>>>>> -- 
>>>>> Nils Schween
>>>>> 
>>>>> Phone: +49 6221 516 557
>>>>> Mail: nils.schween at mpi-hd.mpg.de
>>>>> PGP-Key: 4DD3DCC0532EE96DB0C1F8B5368DBFA14CB81849
>>>>> 
>>>>> Max Planck Institute for Nuclear Physics
>>>>> Astrophysical Plasma Theory (APT)
>>>>> Saupfercheckweg 1, D-69117 Heidelberg
>>>>> https://www.mpi-hd.mpg.de/mpi/en/research/scientific-divisions-and-groups/independent-research-groups/apt
>>> 
>>> -- 
>>> Nils Schween
>>> PhD Student
>>> 
>>> Phone: +49 6221 516 557
>>> Mail: nils.schween at mpi-hd.mpg.de
>>> PGP-Key: 4DD3DCC0532EE96DB0C1F8B5368DBFA14CB81849
>>> 
>>> Max Planck Institute for Nuclear Physics
>>> Astrophysical Plasma Theory (APT)
>>> Saupfercheckweg 1, D-69117 Heidelberg
>>> https://www.mpi-hd.mpg.de/mpi/en/research/scientific-divisions-and-groups/independent-research-groups/apt
>> 
> 


