Increasing convergence rate

jerome ho jerome.snho at
Wed Jan 28 17:31:40 CST 2009

On Sat, Jan 24, 2009 at 1:53 AM, Jed Brown <jed at> wrote:
> The 20% result difference makes me very worried that the matrices are
> actually different.

> Are you still using BoomerAMG?  If your 1Mx1M matrix comes from a 2D
> problem you might be able to compare with a direct solve (-pc_type lu
> -pc_factor_mat_solver_package mumps) but if it's 3D, that would take
> way too much memory.  It's a good idea to make the problem as small as
> possible (like 100x100 or less) when dealing with issues of
> correctness.  It's really hard to make a preconditioner exactly the
> same in parallel, even parallel ILU (like Euclid with default options)
> is not exactly the same.  It's silly, but if you can't make the
> problem smaller, can't use a direct solver, and don't have an easy way
> to determine if the parallel matrix is the same as the serial one, try
> -pc_type redundant -pc_redundant_type hypre; the results (up to
> rounding error due to non-associativity) and number of iterations
> should be the same as in serial, but the monitored residuals won't be
> exactly the same since they are computed differently.
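For reference, the options quoted above would be combined on the command
line roughly like this (a sketch; `./ex1` is a placeholder for your own
PETSc executable, and -ksp_monitor is just added to watch the residuals):

```
# Small 2D problem: check correctness against a direct solve via MUMPS
./ex1 -pc_type lu -pc_factor_mat_solver_package mumps

# Parallel run that mimics the serial preconditioner: each process
# redundantly solves the whole problem with hypre
mpiexec -n 4 ./ex1 -pc_type redundant -pc_redundant_type hypre -ksp_monitor
```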

Thanks for your advice. I finally managed to nail down the problem.
Earlier, on smaller test cases, the matrices in both the serial and
parallel runs had been verified to be the same, so I didn't think the
matrices were the issue. But when I tried the redundant method I still
got the 20% difference.

So I rechecked the matrix stamping and found a few elements that I had
missed when the matrix was distributed across more processors, which
made the system even harder to converge.
Now the serial and parallel results agree and converge within several
iterations. Thanks again!
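For anyone hitting the same issue: once you can dump the assembled
matrices from the serial and parallel runs (e.g. via a PETSc matrix
viewer), an entrywise comparison catches missed elements quickly. A
minimal sketch, assuming the matrices fit in memory as dense NumPy
arrays; `matrices_match` is a hypothetical helper, not a PETSc routine:

```python
import numpy as np

def matrices_match(a, b, rtol=1e-12):
    """True if a and b agree entrywise, relative to their magnitude."""
    diff = np.max(np.abs(a - b))
    scale = max(np.max(np.abs(a)), np.max(np.abs(b)), 1.0)
    return diff <= rtol * scale

# Toy check: a small FD-style matrix vs. a copy with one "missed" entry,
# mimicking an element lost during parallel assembly.
a = np.array([[4., -1., 0.],
              [-1., 4., -1.],
              [0., -1., 4.]])
b = a.copy()
b[2, 1] = 0.0   # simulate the dropped element

print(matrices_match(a, a))  # True
print(matrices_match(a, b))  # False
```

For very large sparse matrices the same idea applies with a sparse
difference instead of dense arrays.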


More information about the petsc-users mailing list