<div dir="ltr"><div dir="ltr"><br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><br></div><div>Both GAMG and ILU are nice and dandy for this, </div></div></div></div></blockquote><div><br></div><div>I would test Richardson/SOR and Chebyshev/Jacobi on the tiny system and converge it way down, say rtol = 1.e-12. See which one is better in the early iterations and pick it. It would be good to check that it actually solves the problem.</div><div><br></div><div>The residual drops about 5 orders in the first iteration and then flatlines. That is very bad. Check that the smoother can actually solve the problem.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>but as soon as I look at a bigger system, like a network with 8500 buses, the out-of-box GAMG craps out. I am not sure where to start when it comes to tuning GAMG. </div></div></div></div></blockquote><div><br></div><div>First try -pc_gamg_nsmooths 0</div><div><br></div><div>You can run with -info and grep on GAMG to see some diagnostic output. Eigenvalue estimates are fragile in practice, and with this parameter and SOR no eigenvalue estimates are needed. The max eigenvalues on all levels should be between roughly 2 (or less) and 3-4; much higher is a sign of a problem.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Attached is the KSP monitor/view output for GAMG on the unsuccessful solve.</div><div><br></div><div>I'm also attaching a zip file which contains the simple PETSc script that loads the binary matrix/vector, as well as two test cases, if you guys are interested in trying it out. 
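As a toy illustration of the smoother check suggested above (run the relaxation by itself, converged way down, and confirm the residual really goes to zero), here is a NumPy stand-in for the PETSc runs. The 5x5 complex system is made up for the sketch; it is not the attached problem, and SOR here plays the role of the Richardson/SOR smoother:

```python
# NumPy stand-in (not PETSc) for the smoother check: run SOR sweeps alone on a
# small system, converged to rtol = 1e-12, and confirm it solves the problem.
import numpy as np

def sor_sweeps(A, b, omega=1.0, rtol=1e-12, max_it=10000):
    """Forward SOR (Gauss-Seidel for omega=1); returns iterate and residual history."""
    n = len(b)
    x = np.zeros_like(b)
    r0 = np.linalg.norm(b)
    hist = []
    for _ in range(max_it):
        for i in range(n):
            sigma = A[i, :] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        res = np.linalg.norm(b - A @ x)
        hist.append(res)
        if res <= rtol * r0:
            break
    return x, hist

# Made-up small complex, strictly diagonally dominant test system
# (a stand-in for a tiny Y-bus; dominance guarantees SOR converges).
n = 5
rng = np.random.default_rng(0)
off = -(rng.random((n, n)) + 1j * rng.random((n, n)))
np.fill_diagonal(off, 0)
A = off + np.diag(1.1 * np.abs(off).sum(axis=1))
b = rng.random(n) + 1j * rng.random(n)

x, hist = sor_sweeps(A, b)
print(f"sweeps: {len(hist)}, final residual: {hist[-1]:.2e}")
```

If the residual history flatlines well above rtol on a system like this, the smoother cannot solve the problem on its own, which is the failure mode described above.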
It only works if you have PETSc configured with complex numbers.</div><div><br></div><div>Thanks</div><div><br></div><div>Justin</div><div><br></div><div>PS - A couple of years ago I asked if there was a paper/tutorial on using/tuning GAMG. Does such a thing exist today?</div></div></div></div></blockquote><div><br></div><div>There is a write-up in the manual that is tutorial-like.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jan 31, 2019 at 5:00 PM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Thu, Jan 31, 2019 at 6:22 PM Justin Chang <<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Here's IMHO the simplest explanation of the equations I'm trying to solve:<div><br></div><div><a href="http://home.eng.iastate.edu/~jdm/ee458_2011/PowerFlowEquations.pdf" target="_blank">http://home.eng.iastate.edu/~jdm/ee458_2011/PowerFlowEquations.pdf</a><br></div><div><br></div><div>Right now we're just trying to solve eq(5) (in section 1), inverting the linear Y-bus matrix. Eventually we have to be able to solve equations like those in the next section.</div></div></div></blockquote><div><br></div><div>Maybe I am reading this wrong, but the Y-bus matrix looks like an M-matrix to me (if all the y's are positive). This means</div><div>that it should be really easy to solve, and I think GAMG should do it. 
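The M-matrix observation above is easy to sanity-check. The sketch below builds the Y-bus for a hypothetical 3-bus network with positive (purely conductive) branch admittances; the topology, admittance values, and shunts are made up for illustration, but the sign pattern and diagonal dominance it verifies are the defining M-matrix properties:

```python
# Hedged sanity check of the M-matrix claim: a Y-bus built from positive
# branch admittances has positive diagonal, non-positive off-diagonals,
# and (weak) diagonal dominance, strict wherever a shunt is present.
import numpy as np

branches = {(0, 1): 4.0, (1, 2): 5.0, (0, 2): 2.5}  # made-up bus pair -> y_ij > 0
shunts = [0.1, 0.0, 0.2]                            # made-up bus-to-ground admittances

n = 3
Y = np.zeros((n, n))
for (i, j), y in branches.items():
    Y[i, i] += y
    Y[j, j] += y
    Y[i, j] -= y
    Y[j, i] -= y
Y += np.diag(shunts)

off = Y - np.diag(np.diag(Y))
assert np.all(np.diag(Y) > 0) and np.all(off <= 0)   # Z-matrix sign pattern
assert np.all(np.diag(Y) >= -off.sum(axis=1))        # weak diagonal dominance
# With a connected network and at least one shunt, Y is a nonsingular
# M-matrix, so its inverse is elementwise nonnegative.
assert np.all(np.linalg.inv(Y) >= -1e-12)
print("Y-bus has the M-matrix structure")
```

This structure is exactly the class of matrices that multigrid and simple relaxation handle well, which is the point being made above.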
You can start out just doing relaxation, like SOR, on</div><div>small examples.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jan 31, 2019 at 1:47 PM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Thu, Jan 31, 2019 at 3:20 PM Justin Chang via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi all,<div><br></div><div>I'm working with some folks to extract a linear system of equations from an external software package that solves power flow equations in complex form. Since that external package uses serial direct solvers like KLU from SuiteSparse, I want a proof of concept where the same matrix can be solved in PETSc using its parallel solvers. </div><div><br></div><div>I got MUMPS to achieve a very minor speedup across two MPI processes on a single node (went from solving a 300k DOF system in 1.8 seconds to 1.5 seconds). However, I want to use iterative solvers and preconditioners, but I have never worked with complex numbers, so I am not sure what the "best" options are given PETSc's capabilities.</div><div><br></div><div>So far I tried GMRES/BJACOBI and it craps out (unsurprisingly). I believe I also tried BICG with BJACOBI, and while it did converge, it did so slowly. Does anyone have recommendations on how one would go about preconditioning PETSc matrices with complex numbers? 
I was originally thinking about converting it to polar form: declaring all voltage magnitudes = sqrt(real^2+imaginary^2) and all angles to be something like a conditional arctan(imaginary/real), because all the papers I've seen in the literature that claim to successfully precondition power flow equations operate in this form.</div></div></blockquote><div><br></div><div>1) We really need to see the (simplified) equations</div><div><br></div><div>2) All complex equations can be converted to a system of real equations twice as large, but this is not necessarily the best way to go</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Justin</div></div>
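The "system of real equations twice as large" mentioned in point 2 above is the standard equivalent real formulation: for A z = b with A = M + iN, solve the 2n x 2n real block system [[M, -N], [N, M]] [x; y] = [Re b; Im b], where z = x + iy. A small NumPy sketch on random made-up data (not the actual Y-bus system):

```python
# Sketch of the complex-to-real rewrite: (M + iN)(x + iy) = b splits into
# real and imaginary parts, giving the block system [[M,-N],[N,M]].
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.random((n, n)) + 1j * rng.random((n, n)) + n * np.eye(n)  # well-conditioned toy matrix
b = rng.random(n) + 1j * rng.random(n)

# Equivalent real formulation, twice as large.
A_real = np.block([[A.real, -A.imag],
                   [A.imag,  A.real]])
b_real = np.concatenate([b.real, b.imag])

xy = np.linalg.solve(A_real, b_real)
z = xy[:n] + 1j * xy[n:]

assert np.allclose(z, np.linalg.solve(A, b))  # matches the complex solve
print("real 2n x 2n system reproduces the complex solution")
```

As noted above, this rewrite is always available but not necessarily the best way to go: the real block system can have worse spectral properties for iterative solvers than the original complex operator.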
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail-m_-7895652953510261891gmail-m_-5927997594778934562gmail-m_4182101500591969802gmail-m_-7596872824010350421gmail-m_8922927719976317655gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div></div>