Poor performance with BoomerAMG?

Barry Smith bsmith at mcs.anl.gov
Wed Feb 20 18:57:57 CST 2008


On Feb 20, 2008, at 2:54 PM, jens.madsen at risoe.dk wrote:

> Thank you Barry. I'll take a look at it:-)
>
> Did you have any summer school suggestions?

   Sorry, I don't know of any,

    Barry

>
>
> Kind Regards
>
> Jens
>
> -----Original Message-----
> From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov 
> ] On Behalf Of Barry Smith
> Sent: Wednesday, February 20, 2008 12:04 AM
> To: petsc-users at mcs.anl.gov
> Subject: Re: Poor performance with BoomerAMG?
>
>
>   Trottenberg has a discussion on page 178 (see the box that begins at
> the bottom of the page and continues onto the next one). See also the
> discussion at the bottom of page 182 with equations 5.6.14 and 5.6.15.
>
>   I totally disagree with his suggestion of interpolating boundary
> nodes differently from interior nodes. It makes the code unnecessarily
> complicated. So long as you have the boundary equations suitably
> scaled, you can simply interpolate everywhere identically.
>
>   Barry
>
>
>
> On Feb 19, 2008, at 8:21 AM, jens.madsen at risoe.dk wrote:
>
>> Hi Barry
>>
>> Two questions.
>>
>> 1) What do you mean by "volume" and "wrong scaling"? Could you
>> translate this into some other terms? I have the book "Multigrid" by
>> Ulrich Trottenberg and the book by Saad, but could not find anything
>> similar.
>>
>> 2) Do you know of any summer schools in scientific computing
>> focusing on Krylov methods, multigrid, and preconditioning (all
>> parallel)?
>>
>> Kind Regards
>>
>> Jens Madsen
>> Ph.D. student
>> Phone direct +45 4677 4560
>> Mobile
>> jens.madsen at risoe.dk
>>
>> Optics and Plasma Research Department
>> Risø National Laboratory
>> Technical University of Denmark - DTU
>> Building 128, P.O. Box 49
>> DK-4000 Roskilde, Denmark
>> Tel +45 4677 4500
>> Fax +45 4677 4565
>> www.risoe.dk
>>
>> -----Original Message-----
>> From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov
>> ] On Behalf Of Barry Smith
>> Sent: Saturday, February 16, 2008 6:49 PM
>> To: petsc-users at mcs.anl.gov
>> Subject: Re: Poor performance with BoomerAMG?
>>
>>
>>   All multigrid solvers depend on proper scaling of the variables.
>> For example, for a Laplacian operator the matrix entries are
>>
>>        \int \grad \phi_i \cdot \grad \phi_j dx
>>
>> Now in 2d \grad \phi is O(1/h) and the element volume (area) is
>> O(h^2), so the terms in the matrix are O(1). In 3d \grad \phi is
>> still O(1/h) but the volume is O(h^3), meaning the matrix entries
>> are O(h). Now say you impose a Dirichlet boundary condition by just
>> saying u_k = g_k. In 2d this is ok, but in 3d you need to use
>> h*u_k = h*g_k; otherwise, when you restrict to the coarser grid, the
>> resulting matrix entries for the boundary are "out of whack" with
>> the matrix entries for the interior of the domain.
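
A minimal sketch of that scaled Dirichlet row in PETSc-style C (the names
A, b, k, h, and g_k are placeholders for the assembled matrix, the
right-hand side, the boundary node index, the mesh spacing, and the
boundary value; they are not from the thread):

    #include <petscmat.h>
    #include <petscvec.h>

    /* Sketch only: write the Dirichlet row as h*u_k = h*g_k so the
       boundary row has the same O(h) magnitude as the 3d interior rows. */
    static PetscErrorCode SetScaledDirichletRow(Mat A, Vec b, PetscInt k,
                                                PetscReal h, PetscScalar g_k)
    {
      PetscErrorCode ierr;
      /* Put h (not 1) on the diagonal of the boundary row ... */
      ierr = MatSetValue(A, k, k, (PetscScalar)h, INSERT_VALUES);CHKERRQ(ierr);
      /* ... and scale the right-hand side entry by the same factor. */
      ierr = VecSetValue(b, k, h * g_k, INSERT_VALUES);CHKERRQ(ierr);
      return 0;
    }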
>>
>> Actually, the behavior of most preconditioners and Krylov methods
>> does depend on the row scaling; multigrid is just particularly
>> sensitive.
>>
>>   Barry
>>
>>
>> On Feb 15, 2008, at 5:36 PM, Andrew T Barker wrote:
>>
>>>
>>>
>>>> Be careful how you handle boundary conditions; you need to make
>>>> sure they have the same scaling as the other equations.
>>>
>>> Could you clarify what you mean?  Is boomerAMG sensitive to scaling
>>> of matrix rows in a way that other solvers/preconditioners are not?
>>>
>>> Andrew
>>>
>>>>
>>>> On Feb 15, 2008, at 8:36 AM, knutert at stud.ntnu.no wrote:
>>>>
>>>>> Hi Ben,
>>>>>
>>>>> Thank you for answering. With gmres and boomeramg I get a run
>>>>> time of 2s, so that is much better. However, if I increase the
>>>>> grid size to 513x513, I get a run time of one minute. With
>>>>> richardson, it fails to converge. LU gives 6 seconds, CG and ICC
>>>>> give 7s, and the DMMG solver gives 3s for the 513x513 problem.
>>>>>
>>>>> When using the DMMG framework, I just used the default solvers.
>>>>> I use the Galerkin process to generate the coarse matrices for
>>>>> the multigrid cycle.
>>>>>
>>>>> Best,
>>>>> Knut
>>>>>
>>>>> Quoting Ben Tay <zonexo at gmail.com>:
>>>>>
>>>>>> Hi Knut,
>>>>>>
>>>>>> I'm currently using boomeramg to solve my Poisson eqn too. I'm
>>>>>> using it on my structured C-grid. I found it to be faster than
>>>>>> LU, especially as the grid size increases. However, I use it as a
>>>>>> preconditioner with GMRES as the solver. Have you tried this
>>>>>> option? Although it's faster, the speed increase is usually less
>>>>>> than double. It seems to be worse if there is a lot of stretching
>>>>>> in the grid.
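
For reference, a rough sketch of the command line for that combination
(BoomerAMG from hypre as the preconditioner, GMRES as the Krylov solver;
option names follow PETSc's hypre interface and may vary by version):

    ./run -ksp_type gmres -pc_type hypre -pc_hypre_type boomeramg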
>>>>>>
>>>>>> Btw, you mention using the DMMG framework and it takes less than
>>>>>> a sec. What solver or preconditioner did you use? It's 4 times
>>>>>> faster than GMRES...
>>>>>>
>>>>>> thanks!
>>>>>>
>>>>>> knutert at stud.ntnu.no wrote:
>>>>>>> Hello,
>>>>>>>
>>>>>>> I am trying to use the hypre multigrid solver to solve a Poisson
>>>>>>> equation. However, on a test case with grid size 257x257 it takes
>>>>>>> 40 seconds to converge on one processor when I run with
>>>>>>> ./run -ksp_type richardson -pc_type hypre -pc_type_hypre boomeramg
>>>>>>>
>>>>>>> Using the DMMG framework, the same problem takes less than a
>>>>>>> second, and the default gmres solver uses only four seconds.
>>>>>>>
>>>>>>> Am I somehow using the solver the wrong way, or is this
>>>>>>> performance expected?
>>>>>>>
>>>>>>> Regards
>>>>>>> Knut Erik Teigen
>>>>>>>
>>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>>
>
>



