General matrix iterative solver

Julian julvar at tamu.edu
Wed Oct 11 09:55:25 CDT 2006


Hello,

I recently integrated PETSc's iterative sparse matrix solver into my FEM code.
I compared results on a problem with 1317 unknowns, using a direct solver to
obtain the reference solution. I also have another in-house sparse iterative
solver that I have been using so far; it was written by someone else, but I
have access to its source.
I compute the 'error norm' of a solution as the square root of the sum of the
squares of the absolute differences between the direct-solver solution and the
iterative-solver solution, ignoring the numerical zeros in the solutions.
I find that, to reach the same order of error norm (1e-13) as the in-house
iterative solver, the PETSc solver takes much longer and needs many more
iterations. The in-house solver took less than one second and 476 iterations,
whereas the PETSc solver took 13 seconds and 4738 iterations.
I'm guessing this has to do with the solver settings in PETSc, such as the
choice of preconditioner.
Can you tell me what the relevant settings are, and how to tweak them so that
I get at least as good performance as the in-house code?
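
From the PETSc manual pages, I gather the Krylov method and the preconditioner
can be selected explicitly with calls along these lines (just a sketch of what
I found, not something I have verified; KSPCG and PCICC are only examples, and
'ksp' would be the solver context I create in the code below):

    // Sketch: select a Krylov method and preconditioner explicitly
    // before KSPSolve.
    PC pc;
    KSPGetPC(ksp, &pc);        // preconditioner context owned by the KSP
    KSPSetType(ksp, KSPCG);    // e.g. conjugate gradients for symmetric systems
    PCSetType(pc, PCICC);      // e.g. incomplete Cholesky preconditioner

Since I already call KSPSetFromOptions, I understand options such as -ksp_type
and -pc_type could also be given at run time, but I do not know which
combination to try first.
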
For reference, here is how I have implemented the PETSc solver:

/////initialization
    PetscInitializeNoArguments();
    Assert( mat = (double*)malloc(sizeof(Mat)) );
    MatCreateSeqAIJ(PETSC_COMM_SELF, L, L,
      PETSC_DEFAULT, PETSC_NULL, (Mat*)mat);

////// this is the function I use to populate the matrix
MatSetValue(*(Mat*)mat, ii, jj, value, ADD_VALUES);

////// this is how I actually solve the matrix
  MatAssemblyBegin(*(Mat*)mat, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(*(Mat*)mat, MAT_FINAL_ASSEMBLY);

  double iter_error = 1e-10;
  int max_iter_num = 10000;
  int num_of_iter;
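  // Note (my understanding from the documentation, not verified): the
  // first value passed to KSPSetTolerances below is rtol, the relative
  // decrease in the residual norm required for convergence, not an
  // absolute error in the solution, so it is not directly comparable to
  // the error norm I compute against the direct solver.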

    Vec rhs, x;
    VecCreateSeqWithArray(PETSC_COMM_SELF, L, b, &rhs);
    VecDuplicate(rhs, &x);

    KSP ksp;
    KSPCreate(PETSC_COMM_SELF, &ksp);
    KSPSetTolerances(ksp, iter_error, PETSC_DEFAULT,
      PETSC_DEFAULT, max_iter_num);
    KSPSetFromOptions(ksp);
    KSPSetOperators(ksp, *(Mat*)mat, *(Mat*)mat,
      SAME_PRECONDITIONER);
    KSPSolve(ksp,rhs,x);

    PetscReal r_norm;
    KSPGetResidualNorm(ksp, &r_norm);
    KSPGetIterationNumber(ksp, &num_of_iter);

        cout << "max_iter_num\t" << max_iter_num << endl;
        cout << "iter_error\t" << iter_error << endl;

        cout << "Matrix solver step " << num_of_iter << ", residual " <<
r_norm << ".\n";

    PetscScalar *p;
    VecGetArray(x, &p);
    for(int i=0; i<L; i++) {
        b[i] = p[i];
    }
    VecRestoreArray(x, &p);

    KSPDestroy(ksp);
    VecDestroy(rhs);
    VecDestroy(x);

cout <<"Iterations for convergence="<< num_of_iter << " - Residual Norm = "
<< r_norm << endl; 



If this is not the typical way to solve this kind of problem, please let me
know which functions I should use.
I should mention that the in-house code is for symmetric matrices, whereas, as
I understand it, the PETSc solver handles general unsymmetric matrices. But I
would think that, for iterative solvers, the performance should still be
roughly the same.
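
Since my matrices are symmetric, I also wonder whether I should be telling
PETSc that explicitly. Something like the following is what I had in mind (a
sketch based on the manual; I have not tested it, and the exact MatSetOption
arguments may depend on the PETSc version):

    // Sketch: mark the assembled matrix as symmetric so that the solver
    // could exploit it (e.g. together with a CG-type method).
    MatSetOption(*(Mat*)mat, MAT_SYMMETRIC);
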
I tested the solvers on some other problems as well and saw the same behavior.
In some cases, no matter how many iterations it went through, the PETSc solver
would not go below a certain error norm, whereas the in-house solver would get
almost exactly the same answer as the direct-solver solution. I would think
the PETSc solver should be able to solve these problems just as easily. It
would be great if anyone could help me figure out the appropriate settings to
use in the PETSc solver.
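
To help diagnose the stalling, I was planning to print why the iteration
stopped after KSPSolve; I believe this is the relevant call (sketch only):

    // Sketch: query why KSPSolve stopped (a positive reason means it
    // converged, a negative one means it diverged or hit the iteration
    // limit).
    KSPConvergedReason reason;
    KSPGetConvergedReason(ksp, &reason);
    cout << "KSP converged reason = " << reason << endl;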

Thanks,
Julian.



