[petsc-users] GMRES plus BlockJacobi behave differently for seemingly identical matrices
Pierre Jolivet
pierre at joliv.et
Fri Oct 24 07:51:42 CDT 2025
> On 24 Oct 2025, at 1:52 PM, Nils Schween <nils.schween at mpi-hd.mpg.de> wrote:
>
> Dear PETSc users, Dear PETSc developers,
>
> In our software, we solve a linear system with PETSc using GMRES
> in conjunction with a BlockJacobi preconditioner, i.e. the default of
> the KSP object.
>
> We have two versions of the system matrix, say A and B. The difference
> between them is the non-zero pattern. The non-zero pattern of matrix B
> is a subset of that of matrix A. Their values should be identical.
>
> When we solve the linear system, using A yields a solution after some
> iterations, whereas using B does not converge.
>
> I created binary files of the two matrices and the right-hand side, and
> wrote a small PETSc program that loads them and demonstrates the
> issue. I attach the files to this email.
>
> We would like to understand why the solver-preconditioner combination
> works in case A and not in case B. Can you help us find out why?
>
> To test whether the two matrices are identical, I subtracted them and
> computed the Frobenius norm of the result. It is zero.
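For reference, here is a minimal sketch of such a reproducer (this is not the attached example; the file names A.bin, B.bin, rhs.bin and the options prefixes A_/B_ are placeholders). It loads the two matrices and the right-hand side from PETSc binary files, prints ||A - B||_F, and then solves with the default KSP/PC for each operator:

#include <petscksp.h>

static PetscErrorCode LoadMat(const char *fname, Mat *M)
{
  PetscViewer viewer;

  PetscFunctionBeginUser;
  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, fname, FILE_MODE_READ, &viewer));
  PetscCall(MatCreate(PETSC_COMM_WORLD, M));
  PetscCall(MatLoad(*M, viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscFunctionReturn(PETSC_SUCCESS);
}

static PetscErrorCode SolveWithPrefix(Mat M, Vec b, const char *prefix)
{
  KSP                ksp;
  Vec                x;
  PetscInt           its;
  KSPConvergedReason reason;

  PetscFunctionBeginUser;
  PetscCall(VecDuplicate(b, &x));
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOptionsPrefix(ksp, prefix));  /* "A_" or "B_" */
  PetscCall(KSPSetOperators(ksp, M, M));
  PetscCall(KSPSetFromOptions(ksp));            /* default KSP/PC unless overridden */
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPGetIterationNumber(ksp, &its));
  PetscCall(KSPGetConvergedReason(ksp, &reason));
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "%s %" PetscInt_FMT " iterations, converged reason %d\n", prefix, its, (int)reason));
  PetscCall(KSPDestroy(&ksp));
  PetscCall(VecDestroy(&x));
  PetscFunctionReturn(PETSC_SUCCESS);
}

int main(int argc, char **argv)
{
  Mat         A, B, D;
  Vec         b;
  PetscViewer viewer;
  PetscReal   norm;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(LoadMat("A.bin", &A));
  PetscCall(LoadMat("B.bin", &B));
  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "rhs.bin", FILE_MODE_READ, &viewer));
  PetscCall(VecCreate(PETSC_COMM_WORLD, &b));
  PetscCall(VecLoad(b, viewer));
  PetscCall(PetscViewerDestroy(&viewer));

  /* ||A - B||_F: identical values, different stored non-zero patterns */
  PetscCall(MatDuplicate(A, MAT_COPY_VALUES, &D));
  PetscCall(MatAXPY(D, -1.0, B, DIFFERENT_NONZERO_PATTERN));
  PetscCall(MatNorm(D, NORM_FROBENIUS, &norm));
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "||A - B||_F = %g\n", (double)norm));
  PetscCall(MatDestroy(&D));

  PetscCall(SolveWithPrefix(A, b, "A_"));
  PetscCall(SolveWithPrefix(B, b, "B_"));

  PetscCall(MatDestroy(&A));
  PetscCall(MatDestroy(&B));
  PetscCall(VecDestroy(&b));
  PetscCall(PetscFinalize());
  return 0;
}

Running it with -A_ksp_converged_reason -B_ksp_converged_reason (or -A_ksp_view -B_ksp_view) shows the difference between the two solves.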
The default subdomain solver is ILU(0).
By definition, this won’t allow fill-in.
So when you are not storing the zeros in B, the quality of your PC is much worse.
You can check this yourself with -A_ksp_view -B_ksp_view:
[…]
  0 levels of fill
  tolerance for zero pivot 2.22045e-14
  matrix ordering: natural
  factor fill ratio given 1., needed 1.
    Factored matrix follows:
      Mat Object: (A_) 1 MPI process
        type: seqaij
        rows=1664, cols=1664
        package used to perform factorization: petsc
        total: nonzeros=117760, allocated nonzeros=117760
        using I-node routines: found 416 nodes, limit used is 5
[…]
  0 levels of fill
  tolerance for zero pivot 2.22045e-14
  matrix ordering: natural
  factor fill ratio given 1., needed 1.
    Factored matrix follows:
      Mat Object: (B_) 1 MPI process
        type: seqaij
        rows=1664, cols=1664
        package used to perform factorization: petsc
        total: nonzeros=49408, allocated nonzeros=49408
        not using I-node routines
Check the number of nonzeros of both factored Mat.
With -B_pc_factor_levels 3, you’ll get roughly similar convergence speed (and density in the factored Mat of both PC).
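If you would rather set this in code than on the command line, the matching call is PCFactorSetLevels(). A minimal sketch, assuming a KSP named ksp_B that carries the B_ prefix and whose PC is the serial default ILU (ksp_B is an illustrative name, not from the attached example):

PC pc;

/* After KSPSetFromOptions() (so the PC type is set) and before KSPSolve(). */
PetscCall(KSPGetPC(ksp_B, &pc));
/* Equivalent of -B_pc_factor_levels 3: allow ILU(3) fill instead of ILU(0). */
PetscCall(PCFactorSetLevels(pc, 3));

If the PC really is block Jacobi (as in a parallel run), the same call goes on each subdomain PC obtained via PCBJacobiGetSubKSP() after KSPSetUp(); the corresponding option should then be -B_sub_pc_factor_levels 3.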
Thanks,
Pierre
>
> To give you more context, we solve a system of partial differential
> equations that models astrophysical plasmas. It is essentially a system
> of advection-reaction equations. We use a discontinuous Galerkin (dG)
> method. Our code relies on the finite element library deal.II
> and its PETSc interface. The system matrices A and B are the result of
> the (dG) discretisation. We use GMRES with a BlockJacobi preconditioner,
> because we do not know any better.
>
> I tested the code I sent with PETSc 3.24.0 and 3.19.1 on my workstation, i.e.
> Linux home-desktop 6.17.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Sun, 12 Oct 2025 12:45:18 +0000 x86_64 GNU/Linux
> I use OpenMPI 5.0.8 and I compiled with mpicc, which in my case uses
> gcc.
>
> In case you need more information, please let me know.
> Any help is appreciated.
>
> Thank you,
> Nils
> <example.tar.gz>
>
> --
> Nils Schween
>
> Phone: +49 6221 516 557
> Mail: nils.schween at mpi-hd.mpg.de
> PGP-Key: 4DD3DCC0532EE96DB0C1F8B5368DBFA14CB81849
>
> Max Planck Institute for Nuclear Physics
> Astrophysical Plasma Theory (APT)
> Saupfercheckweg 1, D-69117 Heidelberg
> https://www.mpi-hd.mpg.de/mpi/en/research/scientific-divisions-and-groups/independent-research-groups/apt