Poor performance with BoomerAMG?

Barry Smith bsmith at mcs.anl.gov
Sat Feb 16 11:49:04 CST 2008


    All multigrid solvers depend on proper scaling of the variables.
For example, for a Laplacian operator (discretized with finite element
basis functions \phi_i) the matrix entries are

         a_ij = \int \grad\phi_i \cdot \grad\phi_j

In 2d, \grad\phi is O(1/h) and the element volume is O(h^2), so the
terms in the matrix are O(1). In 3d, \grad\phi is still O(1/h) but the
volume is O(h^3), so the matrix entries are O(h). Now say you impose a
Dirichlet boundary condition by simply setting u_k = g_k. In 2d this is
ok, but in 3d you need to use h*u_k = h*g_k; otherwise, when you
restrict to the coarser grid, the resulting matrix entries for the
boundary are "out of whack" with the matrix entries for the interior
of the domain.
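
As a concrete sketch (assuming the current PETSc MatZeroRows() calling
sequence, which is newer than what this thread had available, and with
nb, boundary_rows, g, and h as placeholder names), the scaled Dirichlet
rows could be applied like this:

    #include <petscmat.h>

    /* After assembling the 3d Laplacian A and right-hand side b:
       zero the Dirichlet rows, put h on the diagonal, and set
       b[row] = h*g[row], i.e. enforce h*u_k = h*g_k so the boundary
       rows have the same O(h) scale as the interior rows. */
    PetscErrorCode ApplyScaledDirichlet(Mat A, Vec b, Vec g, PetscInt nb,
                                        const PetscInt boundary_rows[],
                                        PetscReal h)
    {
      PetscFunctionBeginUser;
      PetscCall(MatZeroRows(A, nb, boundary_rows, (PetscScalar)h, g, b));
      PetscFunctionReturn(PETSC_SUCCESS);
    }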

Actually, the behavior of most preconditioners and Krylov methods does
depend on the row scaling; multigrid is just particularly sensitive to it.
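
One quick way to spot such scaling problems (again a sketch using the
current PETSc API) is to look at the spread of the diagonal entries:

    /* Report the spread of |diagonal| entries of A; a very large ratio
       often indicates rows (e.g. boundary rows) scaled differently from
       the interior rows. */
    Vec       d;
    PetscReal dmin, dmax;
    PetscCall(MatCreateVecs(A, NULL, &d));
    PetscCall(MatGetDiagonal(A, d));
    PetscCall(VecAbs(d));
    PetscCall(VecMin(d, NULL, &dmin));
    PetscCall(VecMax(d, NULL, &dmax));
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "diag ratio max/min = %g\n",
                          (double)(dmax / dmin)));
    PetscCall(VecDestroy(&d));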

    Barry


On Feb 15, 2008, at 5:36 PM, Andrew T Barker wrote:

>
>
>> Be careful how you handle boundary conditions; you need to make sure
>> they have the same scaling as the other equations.
>
> Could you clarify what you mean?  Is boomerAMG sensitive to scaling  
> of matrix rows in a way that other solvers/preconditioners are not?
>
> Andrew
>
>>
>> On Feb 15, 2008, at 8:36 AM, knutert at stud.ntnu.no wrote:
>>
>>> Hi Ben,
>>>
>>> Thank you for answering. With gmres and boomeramg I get a run time  
>>> of
>>> 2s, so that is much better. However, if I increase the grid size to
>>> 513x513, I get a run time of one minute. With richardson, it fails
>>> to converge.
>>> LU gives 6 seconds, CG and ICC give 7s, and the DMMG solver 3s for
>>> the 513x513 problem.
>>>
>>> When using the DMMG framework, I just used the default solvers.
>>> I use the Galerkin process to generate the coarse matrices for
>>> the multigrid cycle.
>>>
>>> Best,
>>> Knut
>>>
>>> Siterer Ben Tay <zonexo at gmail.com>:
>>>
>>>> Hi Knut,
>>>>
>>>> I'm currently using boomeramg to solve my Poisson eqn too. I'm
>>>> using it
>>>> on my structured C-grid. I found it to be faster than LU,
>>>> especially as
>>>> the grid size increases. However I use it as a preconditioner with
>>>> GMRES as the solver. Have you tried this option? Although it's
>>>> faster,
>>>> the speed increase is usually less than double. It seems to be
>>>> worse if
>>>> there is a lot of stretching in the grid.
>>>>
>>>> Btw, you mention using the DMMG framework and it takes less than a
>>>> sec. What solver or preconditioner did you use? It's 4 times faster
>>>> than GMRES...
>>>>
>>>> thanks!
>>>>
>>>> knutert at stud.ntnu.no wrote:
>>>>> Hello,
>>>>>
>>>>> I am trying to use the hypre multigrid solver to solve a Poisson
>>>>> equation.
>>>>> However, on a test case with grid size 257x257 it takes 40
>>>>> seconds  to converge
>>>>> on one processor when I run with
>>>>> ./run -ksp_type richardson -pc_type hypre -pc_type_hypre boomeramg
>>>>>
>>>>> Using the DMMG framework, the same problem takes less than a  
>>>>> second,
>>>>> and the default gmres solver uses only four seconds.
>>>>>
>>>>> Am I somehow using the solver the wrong way, or is this
>>>>> performance  expected?
>>>>>
>>>>> Regards
>>>>> Knut Erik Teigen
>>>>>
>>>>>
>>>
>>>
>>>
>>
>



