[petsc-users] Hypre - AMS / CG

Barry Smith bsmith at mcs.anl.gov
Thu Oct 29 10:27:36 CDT 2015


  I am including the hypre issue tracker on this discussion since they obviously know much more about their solvers than we do.

  Since you have markedly different qualitative behavior between the sequential case (where hypre seems fine) and the parallel case, my guess is that there is an error in your generation of the problem you pass to hypre in parallel. I recommend running your code with a very small matrix on 1 and 2 processes and using MatView() or -mat_view and other command line options to make sure that you are providing the EXACT same matrix to hypre on 1 and 2 processes.
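One concrete way to run that comparison from the command line (a sketch only: the executable name ./my_solver is a placeholder, and this is only practical for a very small problem size):

```shell
# Print the assembled matrix on 1 and 2 processes and compare the output.
mpiexec -n 1 ./my_solver -mat_view > mat_np1.txt
mpiexec -n 2 ./my_solver -mat_view > mat_np2.txt
diff mat_np1.txt mat_np2.txt   # any difference points at the parallel assembly
```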

You can also use MatView() with a binary file and MatLoad(): generate the matrix on one process, save it, then load it onto two processes and run with the matrix that worked on one process. Does its convergence fall apart on 2 processes (which would indicate an issue with hypre), or does it have similar convergence on 2?
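A sketch of the save/load step, assuming a standard PETSc program (the file name "A.bin" and the surrounding solver setup are placeholders; this uses the classic ierr/CHKERRQ error-checking style):

```c
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  /* Run 1, on one process: after assembling A, save it in PETSc binary format:
       PetscViewerBinaryOpen(PETSC_COMM_WORLD, "A.bin", FILE_MODE_WRITE, &viewer);
       MatView(A, viewer);
       PetscViewerDestroy(&viewer);                                          */

  /* Run 2, on two processes: load the bit-identical matrix saved above. */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "A.bin", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatLoad(A, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  /* ... hand A to the existing KSP/PC setup and compare convergence ... */

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

If convergence falls apart even with the loaded, bit-identical matrix, the issue is on the hypre side; if it matches the one-process run, the parallel assembly is the suspect.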

   Barry

> On Oct 29, 2015, at 10:10 AM, Vincent Huber <vincent.huber at cemosis.fr> wrote:
> 
> Hello all,
> 
> I want to solve the following system [1]
> ∇ × 1/μ ∇ × u + ∇ P = f
> ∇ ⋅ u = 0
> with μ constant.
> 
> This produces two systems to solve in the preconditioner: the first is related to AMS (with a non-null β), the second is a Laplacian.
> 
> I use Hypre/Ams as a preconditioner to solve the first system.
> 
> I have verified my implementation on the full problem in sequential runs, using the default parameters, on 3D academic test cases. I obtain second-order accuracy, as expected.
> 
> In some parallel cases, the solver for the first system stops converging after a few iterations (indefinite matrix or indefinite preconditioner). If I generate [2] a new partitioning from the same mesh, I can sometimes, but not always, obtain convergence.
> I have implemented my own version of the AMS preconditioner following [1], and the system (slowly) converges.
> I use {hypre-AMS OR my own implementation}/CG to solve the corresponding system.
> If I switch from hypre-AMS/CG to hypre-AMS/GMRES, then I obtain convergence, at least for that case.
> If I use my own implementation with CG, then I also obtain convergence for that case (but it is very slow!)
> 
> My questions are:
> 
> 	• Why does the hypre-AMS preconditioner lose the SPD property?
> 	• Am I missing anything else?
> I have tried various cycle-type and smoothing options in the hypre-ams preconditioner with CG without success.
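A sketch of how such cycle-type and smoothing options can be passed through PETSc's command line (option names as exposed by -pc_type hypre -pc_hypre_type ams; verify them against -help, and note the values are illustrative only, not a recommendation):

```shell
mpiexec -n 2 ./my_solver \
  -ksp_type cg \
  -pc_type hypre -pc_hypre_type ams \
  -pc_hypre_ams_cycle_type 7 \
  -pc_hypre_ams_relax_type 2 \
  -pc_hypre_ams_relax_times 1
```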
> 
> Vincent H
> 
> [1] For more details, see Parallel numerical solution of the time-harmonic Maxwell equations in mixed form.
> [2] gmsh my.msh -3 -part n -o new.msh
> 
> Research Engineer (PhD), CeMoSiS - vincent.huber at cemosis.fr
> Tel: +33 (0)3 68 85 02 06
> IRMA - 7, rue René Descartes
> 67 000 Strasbourg
> 
> 
> 


