Hi,

I'm not on the PETSc team, but I have some experience with these two multilevel preconditioners. For starters, take a look at this publication by one of the HYPRE team members on the parameter choices for 2D and 3D Poisson problems that deliver the best performance; pay particular attention to pp. 18-22. There are many knobs on these solvers (BoomerAMG in particular), and they may need tweaking to get good performance.

https://computation.llnl.gov/casc/linear_solvers/pubs/yang1.pdf

Also, what is your definition of poor scalability? Poor with respect to increasing processor count (i.e., parallel scalability), or with respect to performance as the problem size increases? Both of these preconditioners have been thoroughly tested on Poisson-style problems, and I'd be surprised if you didn't get (at least) good scalability with respect to problem size.

Travis
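P.S. To make that concrete, below is the sort of option set I would start from for a 3D problem. Treat it as a sketch, not a recipe: the strong threshold of 0.5 is the value usually recommended for 3D, and HMIS coarsening with ext+i interpolation is what I've had luck with for keeping complexities down, but these are my own starting guesses, and option names can vary between PETSc versions, so confirm everything against the -help output.

    # BoomerAMG via PETSc: a 3D Poisson starting point
    -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg
    -pc_hypre_boomeramg_strong_threshold 0.5
    -pc_hypre_boomeramg_coarsen_type HMIS
    -pc_hypre_boomeramg_interp_type ext+i
    -pc_hypre_boomeramg_agg_nl 1

    # ML: since PCML is built on top of PCMG, the level smoothers
    # are tuned through the usual -mg_levels_* options
    -ksp_type cg -pc_type ml -pc_ml_maxNlevels 5
    -mg_levels_ksp_type richardson -mg_levels_pc_type sor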
<span class="Apple-style-span" style="font-size: 12px; "><div>^^^^^^^^^^^^^^^^^^^^^^^^^^^^^</div><div>Travis Austin, Ph.D.</div><div style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; ">Tech-X Corporation</div><div style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; ">5621 Arapahoe Ave, Suite A</div><div style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; ">Boulder, CO 80303</div><div><a href="mailto:austin@txcorp.com">austin@txcorp.com</a></div><div>^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^</div><br class="Apple-interchange-newline"></span>
On May 19, 2011, at 5:56 PM, Li, Zhisong (lizs) wrote:

> Hi, PETSc Team,
>
> Recently I tested my 3D structured Poisson-style problem with the ML and BoomerAMG preconditioners, respectively. In comparison, ML is more efficient in the preconditioning stage and in RAM usage, but it requires roughly twice as many iterations of the same KSP solver, bringing down the overall efficiency. And neither PC scales well. I wonder if there's any specific approach to optimizing ML to reduce the KSP iteration count by setting certain command-line options.
>
> I also saw some previous PETSc mail archives mentioning the "local preconditioner". As some important PCs like PCILU and PCICC are not available for parallel processing, it may be beneficial to apply them as local preconditioners. The question is: how does one set up a local preconditioner?
>
> Thank you very much.
>
> Zhisong Li