Increasing convergence rate
jed at 59A2.org
Fri Jan 23 11:53:14 CST 2009
On Thu, Jan 22, 2009 at 20:59, jerome ho <jerome.snho at gmail.com> wrote:
> I'm getting strange results. In parallel (on 2 processors), the
> residual doesn't seem to converge further but fluctuates between 1e-9
> and 1e-8 (after 100+ iterations), whereas it solves in 8 iterations on
> a single machine. I decreased the rtol (from 1e-7) for the parallel
> simulation because I'm getting a 20% result difference.
The 20% result difference makes me very worried that the serial and
parallel matrices are not actually the same.
Are you still using BoomerAMG? If your 1Mx1M matrix comes from a 2D
problem you might be able to compare with a direct solve (-pc_type lu
-pc_factor_mat_solver_package mumps) but if it's 3D, that would take
way too much memory. It's a good idea to make the problem as small as
possible (like 100x100 or less) when dealing with issues of
correctness. It's really hard to make a preconditioner exactly the
same in parallel; even parallel ILU (like Euclid with default options)
is not exactly the same. It's silly, but if you can't make the
problem smaller, can't use a direct solver, and don't have an easy way
to determine whether the parallel matrix is the same as the serial one,
try -pc_type redundant -pc_redundant_type hypre: the results (up to
rounding error due to non-associativity) and number of iterations
should be the same as in serial, but the monitored residuals won't
match exactly since they are computed differently.
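A sketch of the two comparison runs described above; the option names
come from this thread, but the executable name, process count, and use
of -ksp_monitor are placeholders for your own setup (and the MUMPS run
assumes PETSc was configured with MUMPS support):

```shell
# Serial reference on a shrunken problem, using a direct solve so the
# answer is trustworthy (requires a PETSc build with MUMPS):
./myapp -pc_type lu -pc_factor_mat_solver_package mumps -ksp_monitor

# Parallel run where each process applies the preconditioner
# redundantly; iteration counts should match the serial hypre run:
mpiexec -n 2 ./myapp -pc_type redundant -pc_redundant_type hypre \
    -ksp_monitor
```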
> When I split into more (6) processors, it's reporting divergence. Am I
> doing something wrong here? Should I be switching to DMMG method
> instead? The matrix size is about 1mil x 1mil.
If you are using a structured grid, then geometric multigrid (DMMG)
should reduce setup time compared to AMG, though AMG may be more
robust. That isn't the issue here, so I wouldn't bother until you get
correct results in parallel.
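For reference, a minimal sketch of the DMMG interface as it existed
around this time (PETSc ~3.0); ComputeRHS and ComputeMatrix are
hypothetical user callbacks, the grid sizes are placeholders, and
error checking is omitted for brevity:

```c
/* Sketch of the old DMMG (geometric multigrid) driver; assumes a
 * PETSc ~3.0 build, user-supplied ComputeRHS/ComputeMatrix, and a
 * 3D structured grid managed by a DA. */
#include "petscdmmg.h"

extern PetscErrorCode ComputeRHS(DMMG, Vec);         /* user-provided */
extern PetscErrorCode ComputeMatrix(DMMG, Mat, Mat); /* user-provided */

int main(int argc, char **argv)
{
  DMMG *dmmg;
  DA    da;

  PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);
  /* Three multigrid levels; finer grids are built by refining the DA. */
  DMMGCreate(PETSC_COMM_WORLD, 3, PETSC_NULL, &dmmg);
  DACreate3d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
             -5, -5, -5, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
             1, 1, 0, 0, 0, &da);
  DMMGSetDM(dmmg, (DM)da);
  DADestroy(da);
  DMMGSetKSP(dmmg, ComputeRHS, ComputeMatrix);
  DMMGSolve(dmmg);
  DMMGDestroy(dmmg);
  PetscFinalize();
  return 0;
}
```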