On Sat, Dec 27, 2008 at 5:28 PM, Tomasz Jankowski <span dir="ltr"><<a href="mailto:tomjan@jay.au.poznan.pl">tomjan@jay.au.poznan.pl</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hello,<br>
<br>
I'm not even a beginner user of PETSc (about three years ago I only passed through briefly - a quick installation of PETSc + a quick diskless cluster setup + a few nights of old code adjustment + a few months of computations), but I have a question for the PETSc developers (the result of Christmas meditations, or rather Christmas surfeit ;-) ).<br>
<br>
Two weeks ago I took part in a Cell processor programming workshop. It seems that something NEW is coming to us. It seems that the never-ending climb of CPU clock speeds has just finished. It seems that if we need to speed up computations in HPC, we will need to switch - start writing new software & rewriting our old software! - to specialized processors such as the PowerXCell or the math coprocessors developed by<br>
ClearSpeed. So my question is: could one of the PETSc<br>
developers comment on this subject?<br>
Are you going to port PETSc to one of these emerging architectures, or maybe to all of them? Is it fast & easy, or another mission impossible? Or maybe it is only commercial propaganda from the guys at IBM, and the NEW is NOT coming to us...?</blockquote>
<div><br>My feeling is that these architectures will indeed be important; however, this view is not universal. Notice<br>that the coprocessor idea has been with us since the beginning of computing, and each time it is sold as<br>
the new revolution, only to disappoint its adherents. I think this time is different because I have seen<br>real performance benefit and new thinking about software design, mainly from Robert van de Geijn<br>at UT Austin (FLAME project).<br>
<br>I see a lot of benefit to running hierarchical, block algorithms on a GPU. However, sparse MatMult() is<br>a lost cause; thus, if the major cost is the sparse matvec, change your algorithm. I truly believe that Krylov<br>accelerators will fade back into the background as we get high-quality implementations of FFT, MG,<br>
FMM, Wavelets, and other order-N methods. These can be cleaned up by relatively simple Krylov wrappers<br>on the outside.<br><br> Matt<br> </div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
tom<br>
<br>
########################################################<br>
# <a href="mailto:tomjan@jay.au.poznan.pl" target="_blank">tomjan@jay.au.poznan.pl</a> #<br>
# <a href="http://jay.au.poznan.pl/%7Etomjan/" target="_blank">jay.au.poznan.pl/~tomjan/</a> #<br>
########################################################<br></blockquote></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>