hypre preconditioners

Barry Smith bsmith at mcs.anl.gov
Tue Jul 14 10:42:58 CDT 2009


    First run the three cases with -log_summary (and also -ksp_view to
see the exact solver options being used) and send those files. This
will tell us where the time is being spent; without this information
any comments are pure speculation. (For example, the "copy" time to
the hypre matrix format is trivial compared to the time to build a
hypre preconditioner, so that is not the problem.)
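
    A sketch of the kind of runs meant here (the executable name ./ns
and the process count are placeholders for your own solver; the option
names are standard PETSc runtime options, and you may need to prepend
the prefix set with KSPSetOptionsPrefix for the pressure solve):

      # Jacobi baseline
      mpiexec -n 8 ./ns -ksp_view -log_summary > jacobi.log
      # hypre BoomerAMG
      mpiexec -n 8 ./ns -pc_type hypre -pc_hypre_type boomeramg \
        -ksp_view -log_summary > boomeramg.log
      # hypre Euclid (parallel ILU)
      mpiexec -n 8 ./ns -pc_type hypre -pc_hypre_type euclid \
        -ksp_view -log_summary > euclid.log

    The tables printed by -log_summary break the time down into events
such as PCSetUp, PCApply and KSPSolve, which shows directly whether the
extra cost is in building the preconditioner or in applying it.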


    What you report is not uncommon; the setup and per-iteration cost
of the hypre preconditioners will be much larger than for the simple
Jacobi preconditioner.
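
    If the -log_summary output shows that most of the time goes into
building the BoomerAMG preconditioner (PCSetUp), the usual knobs to
experiment with are the coarsening and strength-of-connection options
of the PETSc hypre interface; a commonly suggested starting point for
a 3D pressure equation is something like the options below (run with
-help to see the exact names available in your PETSc version):

      -pc_type hypre -pc_hypre_type boomeramg
      -pc_hypre_boomeramg_strong_threshold 0.5
      -pc_hypre_boomeramg_coarsen_type HMIS
      -pc_hypre_boomeramg_agg_nl 1

    These settings produce smaller and sparser coarse-grid operators,
which lowers both the setup and the per-iteration cost, usually at the
price of a few extra Krylov iterations.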

    Barry

On Jul 14, 2009, at 3:36 AM, Klaij, Christiaan wrote:

>
> I'm solving the steady incompressible Navier-Stokes equations  
> (discretized with FV on unstructured grids) using the SIMPLE  
> Pressure Correction method. I use Picard linearization and solve
> the momentum equations with BICG and the pressure equation with CG.
> Currently, for parallel runs, I'm using JACOBI as
> a preconditioner. My grids typically have a few million cells and I  
> use between 4 and 16 cores (1 to 4 quadcore CPUs on a linux  
> cluster). A significant portion of the CPU time goes into solving  
> the pressure equation. To reach the relative tolerance I need, CG  
> with JACOBI takes about 100 iterations per outer loop for these  
> problems.
>
> In order to reduce CPU time, I've compiled PETSc with support for  
> Hypre and I'm looking at BoomerAMG and Euclid to replace JACOBI as a  
> preconditioner for the pressure equation. With default settings,  
> both BoomerAMG and Euclid greatly reduce the number of iterations:  
> with BoomerAMG 1 or 2 iterations are enough, with Euclid about 10.  
> However, I do not get any reduction in CPU time. With Euclid, CPU  
> time is similar to JACOBI and with BoomerAMG it is approximately  
> doubled.
>
> Is this what one can expect? Are BoomerAMG and Euclid meant for much
> larger problems? I understand Hypre uses a different matrix storage
> format; is CPU time 'lost in translation' between PETSc and Hypre
> for these small problems? Are there maybe any settings I should
> change?
>
> Chris
>
> dr. ir. Christiaan Klaij
> CFD Researcher, Research & Development
> MARIN
> 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
> T +31 317 49 39 11 / +31 317 49 33 44, F +31 317 49 32 45
> c.klaij at marin.nl, www.marin.nl
>


