<div dir="ltr"><div dir="ltr"><div dir="ltr">So my matrix doesn't quite look like that, since I am solving a three-phase unbalanced system. Here's an example of what my matrix looks like for the IEEE-4 bus system with three-phases (so 12 total dofs): <div><br></div><div><div>Mat Object: 1 MPI processes</div><div> type: seqaij</div><div>row 0: (0, 344.553 - 1240.43 i) (1, 30.7034 + 10.9125 i) (2, 31.1021 + 10.5924 i) (3, -1.28446 + 2.66353 i) (4, 0.623954 - 0.914098 i) (5, 0.225182 - 0.593995 i)</div><div>row 1: (0, 30.7034 + 10.9125 i) (1, 344.705 - 1240.53 i) (2, 30.945 + 10.7235 i) (3, 0.623954 - 0.914098 i) (4, -1.43666 + 2.76012 i) (5, 0.382357 - 0.725045 i)</div><div>row 2: (0, 31.1021 + 10.5924 i) (1, 30.945 + 10.7235 i) (2, 344.423 - 1240.34 i) (3, 0.225182 - 0.593995 i) (4, 0.382357 - 0.725045 i) (5, -1.15424 + 2.57087 i)</div><div>row 3: (0, -1.28446 + 2.66353 i) (1, 0.623954 - 0.914098 i) (2, 0.225182 - 0.593995 i) (3, 1.38874 - 3.28923 i) (4, -0.623954 + 0.914098 i) (5, -0.225182 + 0.593995 i) (6, -0.312601 + 1.8756 i)</div><div>row 4: (0, 0.623954 - 0.914098 i) (1, -1.43666 + 2.76012 i) (2, 0.382357 - 0.725045 i) (3, -0.623954 + 0.914098 i) (4, 1.54094 - 3.38582 i) (5, -0.382357 + 0.725045 i) (7, -0.312601 + 1.8756 i)</div><div>row 5: (0, 0.225182 - 0.593995 i) (1, 0.382357 - 0.725045 i) (2, -1.15424 + 2.57087 i) (3, -0.225182 + 0.593995 i) (4, -0.382357 + 0.725045 i) (5, 1.25852 - 3.19657 i) (8, -0.312601 + 1.8756 i)</div><div>row 6: (3, -0.312601 + 1.8756 i) (6, 1.96462 - 7.75312 i) (7, -0.499163 + 0.731278 i) (8, -0.180145 + 0.475196 i) (9, -1.02757 + 2.13082 i) (10, 0.499163 - 0.731279 i) (11, 0.180145 - 0.475196 i)</div><div>row 7: (4, -0.312601 + 1.8756 i) (6, -0.499163 + 0.731278 i) (7, 2.08637 - 7.8304 i) (8, -0.305885 + 0.580036 i) (9, 0.499163 - 0.731279 i) (10, -1.14932 + 2.2081 i) (11, 0.305885 - 0.580036 i)</div><div>row 8: (5, -0.312601 + 1.8756 i) (6, -0.180145 + 0.475196 i) (7, -0.305885 + 0.580036 i) (8, 1.86044 - 7.679 i) (9, 0.180145 
- 0.475196 i) (10, 0.305885 - 0.580036 i) (11, -0.923391 + 2.0567 i)</div><div>row 9: (6, -1.02757 + 2.13082 i) (7, 0.499163 - 0.731279 i) (8, 0.180145 - 0.475196 i) (9, 1.3396 - 2.28195 i) (10, -0.499163 + 0.731278 i) (11, -0.180145 + 0.475196 i)</div><div>row 10: (6, 0.499163 - 0.731279 i) (7, -1.14932 + 2.2081 i) (8, 0.305885 - 0.580036 i) (9, -0.499163 + 0.731278 i) (10, 1.46136 - 2.35922 i) (11, -0.305885 + 0.580036 i)</div><div>row 11: (6, 0.180145 - 0.475196 i) (7, 0.305885 - 0.580036 i) (8, -0.923391 + 2.0567 i) (9, -0.180145 + 0.475196 i) (10, -0.305885 + 0.580036 i) (11, 1.23543 - 2.20782 i)</div></div><div><br></div><div>Both GAMG and ILU handle this small case fine, but as soon as I move to a bigger system, like a network with 8500 buses, out-of-the-box GAMG fails to converge. I am not sure where to start when it comes to tuning GAMG. Attached is the KSP monitor/view output for GAMG on the unsuccessful solve.</div><div><br></div><div>I'm also attaching a zip file that contains the simple PETSc script that loads the binary matrix/vector, as well as two test cases, if you are interested in trying it out. It only works if you have PETSc configured with complex numbers.</div><div><br></div><div>Thanks</div><div><br></div><div>Justin</div><div><br></div><div>PS: A couple of years ago I asked if there was a paper/tutorial on using/tuning GAMG. 
Does such a thing exist today?</div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jan 31, 2019 at 5:00 PM Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Thu, Jan 31, 2019 at 6:22 PM Justin Chang <<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Here's IMHO the simplest explanation of the equations I'm trying to solve:<div><br></div><div><a href="http://home.eng.iastate.edu/~jdm/ee458_2011/PowerFlowEquations.pdf" target="_blank">http://home.eng.iastate.edu/~jdm/ee458_2011/PowerFlowEquations.pdf</a><br></div><div><br></div><div>Right now we're just trying to solve eq(5) (in section 1), inverting the linear Y-bus matrix. Eventually we have to be able to solve equations like those in the next section.</div></div></div></blockquote><div><br></div><div>Maybe I am reading this wrong, but the Y-bus matrix looks like an M-matrix to me (if all the y's are positive). This means</div><div>that it should be really easy to solve, and I think GAMG should do it. 
You can start out just doing relaxation, like SOR, on</div><div>small examples.</div><div><br></div><div>  Thanks,</div><div><br></div><div>     Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jan 31, 2019 at 1:47 PM Matthew Knepley &lt;<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Thu, Jan 31, 2019 at 3:20 PM Justin Chang via petsc-users &lt;<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>&gt; wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi all,<div><br></div><div>I'm working with some folks to extract a linear system of equations from an external software package that solves power flow equations in complex form. Since that external package uses serial direct solvers like KLU from SuiteSparse, I want a proof-of-concept where the same matrix can be solved in PETSc using its parallel solvers. </div><div><br></div><div>I got MUMPS to achieve a very minor speedup across two MPI processes on a single node (went from solving a 300k dof system in 1.8 seconds to 1.5 seconds). However, I want to use iterative solvers and preconditioners, but I have never worked with complex numbers, so I am not sure what the "best" options are given PETSc's capabilities.</div><div><br></div><div>So far I tried GMRES/BJACOBI and it fails to converge (unsurprisingly). I believe I also tried BICG with BJACOBI, and while it did converge, it did so slowly. 
Does anyone have recommendations on how one would go about preconditioning PETSc matrices with complex numbers? I was originally thinking about converting it to polar form: declaring all voltage magnitudes = sqrt(real^2 + imaginary^2) and all angles to be a quadrant-aware arctan(imaginary/real), i.e. atan2(imaginary, real), because all the papers I've seen in the literature that claim to successfully precondition power flow equations operate in this form.</div></div></blockquote><div><br></div><div>1) We really need to see the (simplified) equations</div><div><br></div><div>2) All complex equations can be converted to a system of real equations twice as large, but this is not necessarily the best way to go</div><div><br></div><div>  Thanks,</div><div><br></div><div>     Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Justin</div></div>
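[Editor's illustration: the magnitude/angle conversion discussed earlier in the thread is just the polar decomposition of a complex number. A minimal NumPy sketch (sample voltage values are invented, not from the attached test cases); note that `np.angle` uses the quadrant-aware atan2, which a bare arctan(imag/real) gets wrong for buses in the left half-plane:]

```python
import numpy as np

# A few complex bus voltages (made-up sample values, not from the attachment).
v = np.array([1.02 - 0.05j, -0.98 + 0.03j, 0.0 + 1.0j])

magnitude = np.abs(v)   # sqrt(real^2 + imaginary^2)
angle = np.angle(v)     # atan2(imaginary, real): quadrant-aware, unlike arctan(imag/real)

# Round-trip back to rectangular form to check the decomposition.
v_back = magnitude * np.exp(1j * angle)
assert np.allclose(v, v_back)
```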
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail-m_-5927997594778934562gmail-m_4182101500591969802gmail-m_-7596872824010350421gmail-m_8922927719976317655gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
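[Editor's illustration of point (2) above: a complex system Y x = b can be rewritten as a real system of twice the size by stacking real and imaginary parts. This is a generic sketch with invented numbers, not code from the thread:]

```python
import numpy as np

def real_equivalent(Y, b):
    """Rewrite the complex system Y x = b as a real system twice the size:
        [ Re(Y)  -Im(Y) ] [ Re(x) ]   [ Re(b) ]
        [ Im(Y)   Re(Y) ] [ Im(x) ] = [ Im(b) ]
    """
    A = np.block([[Y.real, -Y.imag],
                  [Y.imag,  Y.real]])
    rhs = np.concatenate([b.real, b.imag])
    return A, rhs

# Toy 2x2 complex system (made-up values).
Y = np.array([[ 3.0 - 1.0j, -1.0 + 0.5j],
              [-1.0 + 0.5j,  2.0 - 2.0j]])
b = np.array([1.0 + 1.0j, 0.0 - 1.0j])

A, rhs = real_equivalent(Y, b)
xr = np.linalg.solve(A, rhs)   # real solve of size 2n
x = xr[:2] + 1j * xr[2:]       # reassemble the complex solution
assert np.allclose(Y @ x, b)
```

[As Matt notes, this doubling is not necessarily the best route: the real-equivalent matrix has different spectral properties, which can hurt iterative solvers even when the complex system is well behaved.]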
</blockquote></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail-m_-5927997594778934562gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
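[Editor's illustration of the "start out just doing relaxation, like SOR" suggestion above: SOR sweeps can be prototyped on a small complex system outside PETSc before reaching for -pc_type sor. A hedged sketch; the matrix values are invented, loosely mimicking the Y-bus structure (large diagonal, small couplings), and the default omega = 1.0 reduces to Gauss-Seidel, which is guaranteed to converge for strictly diagonally dominant matrices:]

```python
import numpy as np

def sor_solve(A, b, omega=1.0, tol=1e-10, max_sweeps=500):
    """Plain SOR sweeps for a complex system A x = b.
    A toy stand-in for PETSc's -ksp_type richardson -pc_type sor,
    not production code. Converges for strictly diagonally dominant A."""
    n = len(b)
    x = np.zeros(n, dtype=complex)
    for _ in range(max_sweeps):
        for i in range(n):
            # Use updated entries x[:i] and old entries x[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

# Small diagonally dominant complex test matrix (invented values).
A = np.array([[10.0 - 30.0j, -1.0 + 3.0j,  0.5 - 1.0j],
              [-1.0 + 3.0j, 12.0 - 28.0j, -0.8 + 2.0j],
              [ 0.5 - 1.0j, -0.8 + 2.0j,  9.0 - 25.0j]])
b = np.array([1.0 + 0.0j, 0.0 - 1.0j, 0.5 + 0.5j])

x = sor_solve(A, b)
assert np.linalg.norm(A @ x - b) < 1e-8
```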
</blockquote></div>