On Wed, Jun 17, 2009 at 1:51 PM, Yann Tambouret <span dir="ltr"><<a href="mailto:yannpaul@bu.edu">yannpaul@bu.edu</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hi,<br>
<br>
I'm new to PETSc and I'm trying to solve a linear system that has nine terms per equation, along the diagonals. I'd like to use a BlueGene machine to solve for a large number of unknowns (tens of millions). Does anyone have advice for such a problem? Which algorithms scale best on such a machine?</blockquote>
<div><br>We have run problems on BG/P with hundreds of millions of unknowns. However, the algorithm/solver is always highly<br>system-dependent. The right approach is to run lots of small examples, preferably on your laptop, to understand the system you<br>
want to solve. Then scale up in stages. At each stage, something new usually becomes important, and you handle that<br>aspect. The right things to look at are the output of -log_summary and the iteration counts.<br><br> Matt<br>
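As a sketch of what such a small-scale run might look like (the executable name <code>./ex_solve</code> is a placeholder for your own PETSc application; the options themselves are standard PETSc runtime options), you can switch solvers and collect profiling data entirely from the command line:<br>

```shell
# Try one Krylov method / preconditioner combination on a small problem first.
# -ksp_monitor prints the residual norm at each iteration,
# -ksp_converged_reason reports why the solve stopped, and
# -log_summary prints the performance profile at the end of the run.
# ./ex_solve is a hypothetical PETSc application binary.
mpiexec -n 4 ./ex_solve -ksp_type cg -pc_type bjacobi \
    -ksp_monitor -ksp_converged_reason -log_summary
```

Because these are runtime options, you can compare, say, <code>-ksp_type gmres</code> against <code>-ksp_type cg</code>, or different preconditioners, without recompiling, and watch how the iteration counts change as the problem size grows.<br>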
</div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><br>
I can of course provide more detail, so please don't hesitate to ask.<br>
<br>
Thanks,<br><font color="#888888">
<br>
Yann<br></font></blockquote></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener<br>