General matrix iterative solver
Julian
julvar at tamu.edu
Wed Oct 11 11:40:21 CDT 2006
You are right. This is a very small problem; I eventually plan on using
it for very large problems.
But for now, I wanted to confirm that I am using the right settings and
that I get an accurate enough solution, by comparing it with the direct
solver solution.
Currently, with the way I have integrated it with my FEA code, I am unable
to pass command line arguments to the solver. Is there some easy way to
pass the options to the solver at run time?
Julian.
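A minimal sketch of two ways to supply solver options without relying on the
command line, assuming the C API of a PETSc release from that era (exact
signatures may differ in other versions):

    #include "petscksp.h"

    int main(int argc, char **argv)
    {
        /* Forwarding argc/argv (instead of calling
           PetscInitializeNoArguments) lets PETSc pick up options such as
           -ksp_type cg if they are ever given on the command line. */
        PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);

        /* Options can also be set inside the code; they behave as if they
           had been given on the command line and are honored by a later
           KSPSetFromOptions() call. */
        PetscOptionsSetValue("-ksp_type", "cg");

        /* ... create the matrix, vectors, and KSP as in the code quoted
           below ... */

        PetscFinalize();
        return 0;
    }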
> -----Original Message-----
> From: owner-petsc-users at mcs.anl.gov
> [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Barry Smith
> Sent: Wednesday, October 11, 2006 10:22 AM
> To: petsc-users at mcs.anl.gov
> Subject: Re: General matrix iterative solver
>
>
> Julian,
>
> If your matrix is symmetric positive definite, then you
> should ALWAYS use -ksp_type cg to use the conjugate gradient
> method instead of the more expensive (and unneeded) GMRES.
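Since the FEM code above cannot pass command-line arguments, the same choice
can be made directly on the KSP object; a minimal sketch (ksp being the KSP
created in the code quoted further down):

    /* Conjugate gradient instead of the default GMRES; appropriate only
       for symmetric positive definite systems. */
    KSPSetType(ksp, KSPCG);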
>
> Next, in testing you can/should run with -ksp_view to have
> the exact solver options printed out to the screen.
>
> Running with -help will have it print the current solver options.
>
> Run with -ksp_converged_reason to see if the solver
> converged or if it failed and why.
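For a run without command-line options, the same information can be queried
in code after KSPSolve; a short sketch (ksp as in the code quoted further
down):

    /* reason > 0: converged (the value tells which tolerance was met);
       reason < 0: diverged or hit the iteration limit. */
    KSPConvergedReason reason;
    KSPGetConvergedReason(ksp, &reason);
    if (reason < 0) {
        PetscPrintf(PETSC_COMM_SELF, "KSP did not converge, reason %d\n",
                    (int)reason);
    }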
>
> Next you need to determine what the in-house solver is doing
> and try to set PETSc to use the same solver.
>
> This problem is very, very small for an iterative solver;
> I recommend iterative solvers only when working with, say,
> 20,000+ unknowns (it also depends on matrix conditioning,
> number of nonzeros, etc.). For small problems a direct solver
> is the way to go. You can use the PETSc direct solver with
> -ksp_type preonly -pc_type lu (or, for symmetric positive
> definite matrices, -pc_type cholesky).
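The programmatic equivalent of those options, for a code that cannot take
command-line arguments, would look roughly like this sketch (ksp as in the
code quoted further down; KSPPREONLY applies the preconditioner exactly once,
so combined with an LU or Cholesky factorization it is a direct solve):

    PC pc;
    KSPGetPC(ksp, &pc);
    /* -ksp_type preonly -pc_type lu */
    KSPSetType(ksp, KSPPREONLY);
    PCSetType(pc, PCLU);
    /* or, for a symmetric positive definite matrix:
       PCSetType(pc, PCCHOLESKY);
       (depending on the PETSc version, Cholesky may require a symmetric
       matrix format such as SBAIJ rather than the AIJ format used below) */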
>
> Let us know what progress you make; it is imperative that you
> determine what the in-house solver is using.
>
> Barry
>
>
>
> On Wed, 11 Oct 2006, Julian wrote:
>
> > Hello,
> >
> > I recently integrated the PETSc iterative sparse matrix solver into my
> > FEM code. I compared the results for a problem with 1317 unknowns,
> > using a direct solver to obtain the reference solution. I have another
> > in-house sparse iterative solver that I have been using so far; it was
> > written by someone else, but I have access to its source.
> > I compute the 'error norm' of a solution by taking the square root of
> > the sum of the squares of the absolute differences between the direct
> > solver solution and the iterative solver solution, ignoring the
> > numerical zeros in the solutions when doing this.
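For concreteness, a minimal sketch of that comparison (the names x_direct,
x_iter, n, and the cutoff tol are illustrative assumptions, not part of the
original code):

    #include <math.h>

    /* l2 norm of the difference between two solutions, skipping entries
       that are numerical zeros in both (tol is an assumed cutoff). */
    double error_norm(const double *x_direct, const double *x_iter,
                      int n, double tol)
    {
        double err2 = 0.0;
        for (int i = 0; i < n; i++) {
            if (fabs(x_direct[i]) < tol && fabs(x_iter[i]) < tol) continue;
            double d = x_direct[i] - x_iter[i];
            err2 += d * d;
        }
        return sqrt(err2);
    }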
> > I find that in order to reach the same order of error norm (1e-13) as
> > the in-house iterative solver, the PETSc solver takes a much longer
> > time and a larger number of iterations. While the in-house solver took
> > less than one second, the PETSc solver took 13 seconds; the in-house
> > solver took 476 iterations whereas the PETSc solver took 4738
> > iterations.
> > I'm guessing this has to do with different settings of the solver in
> > PETSc, such as the preconditioner.
> > Can you tell me what the different settings are, and how to tweak them
> > so that I can at least match the performance of the in-house code?
> > Given below is how I have implemented the PETSc solver:
> >
> > /////initialization
> > PetscInitializeNoArguments();
> > Assert( mat = (double*)malloc(sizeof(Mat)) );
> > MatCreateSeqAIJ(PETSC_COMM_SELF, L, L,
> > PETSC_DEFAULT, PETSC_NULL, (Mat*)mat);
> >
> > ////// this is the function I use to populate the matrix
> > MatSetValue(*(Mat*)mat, ii, jj, value, ADD_VALUES);
> >
> > ////// this is how I actually solve the matrix
> > MatAssemblyBegin(*(Mat*)mat, MAT_FINAL_ASSEMBLY);
> > MatAssemblyEnd(*(Mat*)mat, MAT_FINAL_ASSEMBLY);
> >
> > double iter_error = 1e-10;
> > int max_iter_num = 10000;
> > int num_of_iter;
> >
> > Vec rhs, x;
> > VecCreateSeqWithArray(PETSC_COMM_SELF, L, b, &rhs);
> > VecDuplicate(rhs, &x);
> >
> > KSP ksp;
> > KSPCreate(PETSC_COMM_SELF, &ksp);
> > KSPSetTolerances(ksp, iter_error, PETSC_DEFAULT,
> > PETSC_DEFAULT, max_iter_num);
> > KSPSetFromOptions(ksp);
> > KSPSetOperators(ksp, *(Mat*)mat, *(Mat*)mat,
> > SAME_PRECONDITIONER);
> > KSPSolve(ksp,rhs,x);
> >
> > PetscReal r_norm;
> > KSPGetResidualNorm(ksp, &r_norm);
> > KSPGetIterationNumber(ksp, &num_of_iter);
> >
> > cout << "max_iter_num\t" << max_iter_num << endl;
> > cout << "iter_error\t" << iter_error << endl;
> >
> > cout << "Matrix solver step " << num_of_iter << ",
> residual "
> > << r_norm << ".\n";
> >
> > PetscScalar *p;
> > VecGetArray(x, &p);
> > for(int i=0; i<L; i++) {
> > b[i] = p[i];
> > }
> > VecRestoreArray(x, &p);
> >
> > KSPDestroy(ksp);
> > VecDestroy(rhs);
> > VecDestroy(x);
> >
> > cout <<"Iterations for convergence="<< num_of_iter << " -
> Residual Norm = "
> > << r_norm << endl;
> >
> >
> >
> > If this is not the typical method to be used to solve this kind of
> > problem, please let me know what functions I should use.
> > I should mention that the in-house code is for symmetric matrices,
> > while, from what I understand, the PETSc solver works for general
> > unsymmetric matrices. But I think the iterative solvers should still
> > give roughly the same performance.
> > I tested the solvers against some other problems as well, and I saw
> > the same behavior. In some cases, no matter how many iterations it
> > went through, the PETSc solver would not go below a certain error
> > norm, whereas the in-house solver would get almost exactly the same
> > answer as the direct solver. I'm thinking the PETSc solver should be
> > able to solve this problem just as easily. It would be great if anyone
> > could help me figure out the appropriate settings I should use in the
> > PETSc solver.
> >
> > Thanks,
> > Julian.
> >
> >
>
>