On Fri, Dec 4, 2009 at 10:42 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
Suggestion:<br>
<br>
1) Discard PETSc<br>
2) Develop a general Py{CL, CUDA, OpenMP-C} system that dispatches "tasks" onto GPUs and multi-core systems (generally we would have one python process per compute node and local parallelism would be done via the low-level kernels to the cores and/or GPUs.)<br>
3) Write a new PETSc using mpi4py and 2), written purely in Python 3000 using all its cool class features, etc.<br>
4) Use other packages (like f2py) to generate bindings so that 3) may be called from Fortran 90/2003 and C++ (these bindings would probably be clumsy, since people usually think the other way around, calling Fortran/C++ from Python, but maybe we shouldn't care; we and our friends can just be 10 times more efficient developing apps in Python).<br>
<br>
5) Enjoy coding much better than today.<br>
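The dispatch layer of step 2 could be sketched in plain Python; this is a hypothetical stand-in, with concurrent.futures playing the role of the Py{CL, CUDA, OpenMP-C} kernel launchers, and the task and function names invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of step 2: one Python process per node dispatching
# "tasks" over local parallel resources. ThreadPoolExecutor stands in for
# the PyCL/PyCUDA/OpenMP-C launchers; real kernels would release the GIL
# or run on the device.

def axpy_task(args):
    """A low-level kernel: y <- a*x + y on one block of data."""
    a, x, y = args
    return [a * xi + yi for xi, yi in zip(x, y)]

def dispatch(kernel, blocks, workers=4):
    """Map one kernel over a list of per-block argument tuples."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, blocks))

if __name__ == "__main__":
    blocks = [(2.0, [1.0, 2.0], [3.0, 4.0]),
              (2.0, [5.0, 6.0], [7.0, 8.0])]
    print(dispatch(axpy_task, blocks))  # [[5.0, 8.0], [17.0, 20.0]]
```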
<br>
What is wrong with Python 3000 that would make this approach not be great?<br></blockquote><div><br>I am a very big fan of this approach. Let me restate it:<br><br> a) Write the initial code in Python for correctness checking, but develop a performance model that will allow a transition to an accelerator<br>
<br> b) Move key pieces to a faster platform using<br><br> i) Cython<br><br> ii) PyCUDA<br><br> c) Coordinate a loose collection of processes with MPI for large problems<br><br>A few comments. First, notice that for many people c) is unnecessary if you can coordinate several GPUs from one CPU. The<br>
key piece here is a dispatch system. Felipe, Rio, and I are getting this done now. Second, we can leverage all of petsc4py<br>in step b.<br><br>My past attempts at this development model have always foundered on inner loops or iterations. These cannot be<br>
done in Python (too slow) and cannot be wrapped (too much overhead). However, now we have a way to do this, namely<br>RunTime Code Generation (like PyCUDA). I think this will get us over the hump, but we have to rethink how we code things,<br>
especially traversals, which now become lists of scheduled tasks as in FLAME (from van de Geijn).<br><br> Matt<br> </div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
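The runtime code generation pattern described above can be shown without a GPU: PyCUDA compiles CUDA C source that is assembled at run time, and the same specialize-then-compile idea in pure Python looks roughly like this (a CPU-side sketch, not PyCUDA's actual API; all names are invented):

```python
# CPU-side sketch of runtime code generation (the pattern PyCUDA applies
# to CUDA C): build specialized source at run time, compile it, then call
# the resulting kernel with no per-element interpreter dispatch.

KERNEL_TEMPLATE = """
def saxpy_{n}(a, x, y):
    return [{body}]
"""

def generate_saxpy(n):
    # Fully unroll the loop for a fixed, known size n.
    body = ", ".join(f"a * x[{i}] + y[{i}]" for i in range(n))
    src = KERNEL_TEMPLATE.format(n=n, body=body)
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace[f"saxpy_{n}"]

if __name__ == "__main__":
    saxpy4 = generate_saxpy(4)
    print(saxpy4(2.0, [1, 2, 3, 4], [10, 20, 30, 40]))  # [12.0, 24.0, 36.0, 48.0]
```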
<br>
Barry<br>
<br>
When coding a new numerical algorithm for PETSc we would just code in Python, then when tested and happy with it, reimplement in Py{CL, CUDA, OpenMP-C}<br>
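The correctness-first Python version of such an algorithm might be a plain conjugate gradient like the sketch below (hypothetical code, dense Python lists only, no PETSc; the inner kernels — dot, matvec, the vector updates — are exactly the pieces that would later move to Py{CL, CUDA, OpenMP-C}):

```python
# Python-first version of a numerical algorithm (conjugate gradient for a
# symmetric positive-definite matrix), written for correctness checking
# before any accelerated reimplementation of the kernels.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def cg(A, b, tol=1e-10, maxit=100):
    x = [0.0] * len(b)
    r = b[:]                 # residual r = b - A*x, with x = 0
    p = r[:]
    rr = dot(r, r)
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        if rr_new < tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

if __name__ == "__main__":
    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    print(cg(A, b))  # exact solution is [1/11, 7/11]
```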
<br>
The other choice is designing and implementing our own cool/great OO language with the flexibility and power we want, but I fear that is way too hard, so why not instead leverage Python.<br>
<br>
<br>
<br>
</blockquote></div><br><br clear="all"><br>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener<br>