[petsc-users] Preconditioning systems of equations with complex numbers

Smith, Barry F. bsmith at mcs.anl.gov
Fri Feb 1 16:34:08 CST 2019


  Some of us have found that overlapping additive Schwarz can (at least sometimes) work well for this kind of problem, with good scalability and good solution time. You can try -pc_type asm.

  Also look at changing the overlap from its default, for example -pc_asm_overlap 2

  Also maybe try direct solvers on the subdomains -sub_pc_type lu

  If you have large problems on each process you might consider using more subdomains than processors with -pc_asm_local_blocks 
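The suggestions above can be combined on a single command line; the sketch below is purely illustrative (the executable name ./app and the -f option are placeholders for your own program and its arguments, not something from this thread):

```shell
# Hypothetical invocation: overlapping additive Schwarz with LU on the subdomains.
# "./app" and "-f Ymat.bin" are placeholders for your own executable and options.
mpiexec -n 4 ./app -f Ymat.bin \
  -ksp_type gmres \
  -pc_type asm \
  -pc_asm_overlap 2 \
  -sub_pc_type lu \
  -ksp_monitor -ksp_view
```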

  Of course, if GAMG is able to get good convergence rates consistently, then it is fine to stick with that.

  Barry



  

> On Feb 1, 2019, at 1:38 AM, Justin Chang via petsc-users <petsc-users at mcs.anl.gov> wrote:
> 
> So my matrix doesn't quite look like that, since I am solving a three-phase unbalanced system. Here's an example of what my matrix looks like for the IEEE-4 bus system with three-phases (so 12 total dofs): 
> 
> Mat Object: 1 MPI processes
>   type: seqaij
> row 0: (0, 344.553 - 1240.43 i) (1, 30.7034 + 10.9125 i) (2, 31.1021 + 10.5924 i) (3, -1.28446 + 2.66353 i) (4, 0.623954 - 0.914098 i) (5, 0.225182 - 0.593995 i)
> row 1: (0, 30.7034 + 10.9125 i) (1, 344.705 - 1240.53 i) (2, 30.945 + 10.7235 i) (3, 0.623954 - 0.914098 i) (4, -1.43666 + 2.76012 i) (5, 0.382357 - 0.725045 i)
> row 2: (0, 31.1021 + 10.5924 i) (1, 30.945 + 10.7235 i) (2, 344.423 - 1240.34 i) (3, 0.225182 - 0.593995 i) (4, 0.382357 - 0.725045 i) (5, -1.15424 + 2.57087 i)
> row 3: (0, -1.28446 + 2.66353 i) (1, 0.623954 - 0.914098 i) (2, 0.225182 - 0.593995 i) (3, 1.38874 - 3.28923 i) (4, -0.623954 + 0.914098 i) (5, -0.225182 + 0.593995 i) (6, -0.312601 + 1.8756 i)
> row 4: (0, 0.623954 - 0.914098 i) (1, -1.43666 + 2.76012 i) (2, 0.382357 - 0.725045 i) (3, -0.623954 + 0.914098 i) (4, 1.54094 - 3.38582 i) (5, -0.382357 + 0.725045 i) (7, -0.312601 + 1.8756 i)
> row 5: (0, 0.225182 - 0.593995 i) (1, 0.382357 - 0.725045 i) (2, -1.15424 + 2.57087 i) (3, -0.225182 + 0.593995 i) (4, -0.382357 + 0.725045 i) (5, 1.25852 - 3.19657 i) (8, -0.312601 + 1.8756 i)
> row 6: (3, -0.312601 + 1.8756 i) (6, 1.96462 - 7.75312 i) (7, -0.499163 + 0.731278 i) (8, -0.180145 + 0.475196 i) (9, -1.02757 + 2.13082 i) (10, 0.499163 - 0.731279 i) (11, 0.180145 - 0.475196 i)
> row 7: (4, -0.312601 + 1.8756 i) (6, -0.499163 + 0.731278 i) (7, 2.08637 - 7.8304 i) (8, -0.305885 + 0.580036 i) (9, 0.499163 - 0.731279 i) (10, -1.14932 + 2.2081 i) (11, 0.305885 - 0.580036 i)
> row 8: (5, -0.312601 + 1.8756 i) (6, -0.180145 + 0.475196 i) (7, -0.305885 + 0.580036 i) (8, 1.86044 - 7.679 i) (9, 0.180145 - 0.475196 i) (10, 0.305885 - 0.580036 i) (11, -0.923391 + 2.0567 i)
> row 9: (6, -1.02757 + 2.13082 i) (7, 0.499163 - 0.731279 i) (8, 0.180145 - 0.475196 i) (9, 1.3396 - 2.28195 i) (10, -0.499163 + 0.731278 i) (11, -0.180145 + 0.475196 i)
> row 10: (6, 0.499163 - 0.731279 i) (7, -1.14932 + 2.2081 i) (8, 0.305885 - 0.580036 i) (9, -0.499163 + 0.731278 i) (10, 1.46136 - 2.35922 i) (11, -0.305885 + 0.580036 i)
> row 11: (6, 0.180145 - 0.475196 i) (7, 0.305885 - 0.580036 i) (8, -0.923391 + 2.0567 i) (9, -0.180145 + 0.475196 i) (10, -0.305885 + 0.580036 i) (11, 1.23543 - 2.20782 i)
> 
> Both GAMG and ILU are nice and dandy for this, but as soon as I look at a bigger system, like a network with 8500 buses, the out-of-the-box GAMG craps out. I am not sure where to start when it comes to tuning GAMG. Attached is the KSP monitor/view output for GAMG on the unsuccessful solve.
> 
> I'm also attaching a zip file which contains the simple PETSc script that loads the binary matrix/vector as well as two test cases, if you guys are interested in trying it out. It only works if you have PETSc configured with complex numbers.
> 
> Thanks
> 
> Justin
> 
> PS - A couple years ago I had asked if there was a paper/tutorial on using/tuning GAMG. Does such a thing exist today?
> 
> On Thu, Jan 31, 2019 at 5:00 PM Matthew Knepley <knepley at gmail.com> wrote:
> On Thu, Jan 31, 2019 at 6:22 PM Justin Chang <jychang48 at gmail.com> wrote:
> Here's IMHO the simplest explanation of the equations I'm trying to solve:
> 
> http://home.eng.iastate.edu/~jdm/ee458_2011/PowerFlowEquations.pdf
> 
> Right now we're just trying to solve eq(5) (in section 1), inverting the linear Y-bus matrix. Eventually we have to be able to solve equations like those in the next section.
> 
> Maybe I am reading this wrong, but the Y-bus matrix looks like an M-matrix to me (if all the y's are positive). This means
> that it should be really easy to solve, and I think GAMG should do it. You can start out just doing relaxation, like SOR, on
> small examples.
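To make the relaxation suggestion concrete, here is a toy NumPy sketch of SOR (omega = 1.0 reduces to Gauss-Seidel) applied to a small, strictly diagonally dominant complex matrix. The matrix and right-hand side are made-up placeholder data, not the Y-bus from this thread, and this is of course not how PETSc implements SOR internally:

```python
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, max_it=200):
    """SOR relaxation (omega = 1.0 is Gauss-Seidel); works for complex entries."""
    n = len(b)
    x = np.zeros(n, dtype=complex)
    for _ in range(max_it):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_old[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# Small diagonally dominant complex test problem (placeholder data)
A = np.array([[4 + 1j, -1, 0],
              [-1, 4 + 1j, -1],
              [0, -1, 4 + 1j]], dtype=complex)
b = np.array([1, 2, 3], dtype=complex)
x = sor(A, b)
```

On matrices without this kind of diagonal dominance, plain relaxation may stall or diverge, which is exactly why it is only suggested here as a starting point on small examples.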
> 
>   Thanks,
> 
>     Matt
>  
> On Thu, Jan 31, 2019 at 1:47 PM Matthew Knepley <knepley at gmail.com> wrote:
> On Thu, Jan 31, 2019 at 3:20 PM Justin Chang via petsc-users <petsc-users at mcs.anl.gov> wrote:
> Hi all,
> 
> I'm working with some folks to extract a linear system of equations from an external software package that solves power flow equations in complex form. Since that external package uses serial direct solvers like KLU from suitesparse, I want a proof-of-concept where the same matrix can be solved in PETSc using its parallel solvers. 
> 
> I got MUMPS to achieve a very minor speedup across two MPI processes on a single node (went from solving a 300k dof system in 1.8 seconds to 1.5 seconds). However, I want to use iterative solvers and preconditioners, but I have never worked with complex numbers, so I am not sure what the "best" options are given PETSc's capabilities.
> 
> So far I tried GMRES/BJACOBI and it craps out (unsurprisingly). I believe I also tried BICG with BJACOBI, and while it did converge, it converged slowly. Does anyone have recommendations on how one would go about preconditioning PETSc matrices with complex numbers? I was originally thinking about converting the system to polar form: declaring all voltage magnitudes = sqrt(real^2+imaginary^2) and all angles to be something like a conditional arctan(imaginary/real), because all the papers I've seen in the literature that claim to successfully precondition power flow equations operate in this form.
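As an aside, that magnitude/angle conversion is the standard polar form, and the "conditional arctan" is exactly the quadrant-aware atan2. A tiny Python illustration (the entry value is taken from the sample matrix above, purely for illustration):

```python
import cmath

v = 344.553 - 1240.43j          # sample complex admittance entry
mag = abs(v)                    # sqrt(real**2 + imaginary**2)
ang = cmath.phase(v)            # atan2(imaginary, real), quadrant-aware
back = cmath.rect(mag, ang)     # round-trip back to rectangular form
```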
> 
> 1) We really need to see the (simplified) equations
> 
> 2) All complex equations can be converted to a system of real equations twice as large, but this is not necessarily the best way to go
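Point 2 can be made concrete: writing A = Ar + i*Ai, x = xr + i*xi, b = br + i*bi, the complex system A x = b is equivalent to the 2n-by-2n real block system [[Ar, -Ai], [Ai, Ar]] [xr; xi] = [br; bi]. A quick NumPy sanity check on random data (purely illustrative, not the power flow matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random complex system with a shifted diagonal so it is well conditioned
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) + 5 * np.eye(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Equivalent real block system, twice the size
K = np.block([[A.real, -A.imag],
              [A.imag,  A.real]])
rhs = np.concatenate([b.real, b.imag])
y = np.linalg.solve(K, rhs)
x_real_form = y[:n] + 1j * y[n:]

# Direct complex solve for comparison
x_complex = np.linalg.solve(A, b)
```

One reason this is "not necessarily the best way to go": the real block form can have worse spectral properties for iterative methods than the original complex operator.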
> 
>  Thanks,
> 
>     Matt
>  
> Justin
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
> 
> 
> <pcgamg_ksp_monitor.txt><solveYmat.zip>


