# [petsc-users] Solving NON-Diagonally dominant sparse system

Peetz, Darin T peetz2 at illinois.edu
Tue Apr 11 07:59:26 CDT 2017

Did you call KSPSetInitialGuessNonzero() or use the option -ksp_initial_guess_nonzero? Otherwise I believe PETSc zeroes out your initial guess when you call KSPSolve().
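A minimal sketch of this fix, assuming `ksp`, `A`, `rhs`, and `x_exact` are already set up as in the code quoted further down the thread (this is a fragment, not a complete program):

```c
/* Use the vector passed to KSPSolve() as the initial guess instead of
   zeroing it (the programmatic equivalent of -ksp_initial_guess_nonzero). */
KSPCreate(PETSC_COMM_SELF, &ksp);
KSPSetOperators(ksp, A, A);
KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);
KSPSetFromOptions(ksp);
KSPSolve(ksp, rhs, x_exact);
```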

________________________________
From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Kaushik Kulkarni [kaushikggg at gmail.com]
Sent: Tuesday, April 11, 2017 7:07 AM
To: Dave May
Cc: PETSc users list
Subject: Re: [petsc-users] Solving NON-Diagonally dominant sparse system

But anyway, since I am starting off with the exact solution itself, shouldn't the norm be zero independent of the conditioning?

On Tue, Apr 11, 2017 at 11:57 AM, Dave May <dave.mayhem23 at gmail.com> wrote:

On Tue, 11 Apr 2017 at 07:28, Kaushik Kulkarni <kaushikggg at gmail.com> wrote:
A strange behavior I am observing is:
Problem: I have to solve A*x = rhs, and I am currently trying a system where I know the exact solution. I have initialized the exact solution in the Vec x_exact.

```c
MatMult(A, x_exact, dummy);               /* dummy = A*x_exact */
VecAXPY(dummy, -1.0, rhs);                /* dummy = dummy - rhs */
VecNorm(dummy, NORM_INFINITY, &norm_val); /* norm_val = ||dummy||, the residual norm */
PetscPrintf(PETSC_COMM_SELF, "Norm = %f\n", norm_val);

/* Starting with the linear solver */
KSPCreate(PETSC_COMM_SELF, &ksp);
KSPSetOperators(ksp, A, A);
KSPSetFromOptions(ksp);
KSPSolve(ksp, rhs, x_exact); /* solve A*x = rhs with initial input x_exact;
                                the result is also stored in x_exact */
```

On running with -pc_type lu -pc_factor_mat_solver_package superlu -ksp_monitor I get the following output:
```
Norm = 0.000000
  0 KSP Residual norm 4.371606462669e+04
  1 KSP Residual norm 5.850058113796e+02
  2 KSP Residual norm 5.832677911508e+02
  3 KSP Residual norm 1.987386549571e+02
  4 KSP Residual norm 1.220006530614e+02
  ...
```

The default KSP is left-preconditioned GMRES, hence the iterates above report the preconditioned residual. If your operator is singular and LU generated garbage, the preconditioned residual can be very different from the true residual.

To see the true residual, use
-ksp_monitor_true_residual

Alternatively, use a right preconditioned KSP method, e.g.
-ksp_type fgmres
(or -ksp_type gcr)
With these methods, you will see the true residual with just -ksp_monitor.
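Putting the suggestions from this thread together as one run command (`./solver` is a stand-in for your own executable):

```
./solver -ksp_type fgmres -pc_type lu \
         -pc_factor_mat_solver_package superlu \
         -ksp_monitor_true_residual -ksp_converged_reason
```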

Thanks
Dave

Since the initial guess is the exact solution, shouldn't the first residual itself be zero and the solve converge in one iteration?

Thanks,
Kaushik

On Tue, Apr 11, 2017 at 10:08 AM, Kaushik Kulkarni <kaushikggg at gmail.com> wrote:
Thank you for the inputs.
I tried Barry's suggestion to use SuperLU, but the solution does not converge. With -ksp_monitor -ksp_converged_reason I get the following output:
```
240 KSP Residual norm 1.722571678777e+07
Linear solve did not converge due to DIVERGED_DTOL iterations 240
```
For some reason it is diverging, although I am sure that for the given system a unique solution exists.

Thanks,
Kaushik

On Tue, Apr 11, 2017 at 1:04 AM, Xiaoye S. Li <xsli at lbl.gov> wrote:
If you need to use SuperLU_DIST: the pivoting there is done statically, using maximum weighted matching, so the small diagonals are usually taken care of as well. It is not as good as partial pivoting, but works most of the time.

Sherry

On Mon, Apr 10, 2017 at 12:07 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:

I would suggest using ./configure --download-superlu and then, when running the program, -pc_type lu -pc_factor_mat_solver_package superlu

Note that this is SuperLU, not SuperLU_DIST. SuperLU uses partial pivoting for numerical stability, so it should be able to handle the small or zero diagonal entries.

Barry

> On Apr 10, 2017, at 1:17 PM, Kaushik Kulkarni <kaushikggg at gmail.com> wrote:
>
> Hello,
> I am trying to solve a 2500x2500 sparse system. To give an idea of the matrix structure, I have attached the file matrix.log, which contains the output of MatView(), along with an image of the nonzero pattern from the draw viewer.
>
> From the matrix structure it can be seen that Jacobi iteration won't work, and with some of the diagonal entries being very small (of the order of 1e-16), LU factorization would also fail.
>
> Can someone please suggest what I could try next in order to make the solution converge?
>
> Thanks,
> Kaushik
> --
> Kaushik Kulkarni
> Department of Mechanical Engineering
> Indian Institute of Technology Bombay
> Mumbai, India
> +91-9967687150
> <matrix.log> <matrix_pattern.png>

--
Kaushik Kulkarni
Department of Mechanical Engineering
Indian Institute of Technology Bombay
Mumbai, India
+91-9967687150
