On Sat, Dec 5, 2009 at 2:29 PM, Ethan Coon <etc2103@columbia.edu> wrote:
> I'm a big fan of Barry's approach as well.
>
> However, the current state of debugging tools is not up to snuff for
> this type of model. In using petsc4py regularly, debugging Cython and
> Python ("user-defined") functions after they've been passed into PETSc
> just plain sucks -- it basically means reverting to print statements.
> And while unit testing can help for a lot of it, expecting users to
> write unit tests for even the most basic of their physics functions is a
> little unreasonable. Even basic exception propagation would be a huge
> step forward.
step forward.<br></blockquote><div><br>This is a very interesting issue. Suppose you write the RHSFunction in Python<br>and pass to SNES. Are you saying that pdb cannot stop in that method when<br>you step over SNESSolve() in Python? That would suck. If on the other hand,<br>
you passed in C, I can see how you are relegated to obscure gdb. This happens<br>to me in PyLith.<br><br> Matt<br> </div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
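
P.S. To make the scenario concrete, here is a minimal sketch of the kind of
user-defined function we are talking about (petsc4py from memory, with a
made-up one-unknown problem, so treat the details as illustrative rather
than exact):

    # hypothetical example: a user-defined residual handed to SNES via petsc4py
    from petsc4py import PETSc

    def residual(snes, x, f):
        # Stepping over snes.solve() in pdb never lands here; in practice
        # you end up dropping pdb.set_trace() or print statements in by hand.
        f[0] = x[0] * x[0] - 2.0
        f.assemble()

    def jacobian(snes, x, J, P):
        P[0, 0] = 2.0 * x[0]
        P.assemble()

    x = PETSc.Vec().createSeq(1)
    f = x.duplicate()
    J = PETSc.Mat().createAIJ(1, nnz=1)

    snes = PETSc.SNES().create()
    snes.setFunction(residual, f)
    snes.setJacobian(jacobian, J)
    snes.setFromOptions()

    x.set(1.0)
    snes.solve(None, x)  # control disappears into the C library here

Once snes.solve() is running, an exception raised inside residual() tends not
to come back to you as a clean Python traceback at the call site, which is
what Ethan means about exception propagation and print-statement debugging.
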
> I'm all in favor of a hybrid model, but the debugging support/issues
> would need to be addressed.
>
> Ethan
>
>
> On Sat, 2009-12-05 at 12:03 -0800, Brad Aagaard wrote:
> > As someone who has a finite-element code built upon PETSc/Sieve with the
> > top-level code in Python, I am in favor of Barry's approach.
> >
> > As Matt mentions, debugging across multiple languages is more complex.
> > Unit testing helps solve some of this because tests associated with the
> > low-level code involve only one language and find most of the bugs.
> >
> > We started with manual C++/Python interfaces, then moved to Pyrex, and
> > now use SWIG. Because we use C++, the OO support in SWIG results in a
> > much simpler, cleaner interface between Python and C++ than what is
> > possible with Pyrex or Cython. SWIG has eliminated 95% of the effort of
> > interfacing Python and C++ compared to Pyrex.
> >
> > Brad
> >
> > > Matthew Knepley wrote:
> > > On Fri, Dec 4, 2009 at 10:42 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
> > >
> > >
> > >     Suggestion:
> > >
> > >     1) Discard PETSc
> > >     2) Develop a general Py{CL, CUDA, OpenMP-C} system that dispatches
> > >     "tasks" onto GPUs and multi-core systems (generally we would have
> > >     one Python process per compute node and local parallelism would be
> > >     done via the low-level kernels to the cores and/or GPUs.)
> > >     3) Write a new PETSc using MPI4py and 2) written purely in Python
> > >     3000 using all its cool class etc. features
> > >     4) Use other packages (like f2py) to generate bindings so that 3)
> > >     may be called from Fortran90/2003 and C++ (these probably suck since
> > >     people usually think the other way around; calling Fortran/C++ from
> > >     Python, but maybe we shouldn't care; we and our friends can just be
> > >     10 times more efficient developing apps in Python).
> > >
> > >     enjoy coding much better than today.
> > >
> > >     What is wrong with Python 3000 that would make this approach not be
> > >     great?
> > >
> > >
> > > I am a very big fan of this approach. Let me restate it:
> > >
> > > a) Write the initial code in Python for correctness checking; however,
> > > develop a performance model which will allow transition to an accelerator
> > >
> > > b) Move key pieces to a faster platform using
> > >
> > >    i) Cython
> > >
> > >    ii) PyCUDA
> > >
> > > c) Coordinate a loose collection of processes with MPI for large problems
> > >
> > > A few comments. Notice that for many people c) is unnecessary if you can
> > > coordinate several GPUs from one CPU. The key piece here is a dispatch
> > > system. Felipe, Rio, and I are getting this done now. Second, we can
> > > leverage all of petsc4py in step b.
> > >
> > > My past attempts at this development model have always floundered on
> > > inner loops or iterations. These cannot be done in Python (too slow) and
> > > cannot be wrapped (too much overhead). However, now we have a way to do
> > > this, namely RunTime Code Generation (like PyCUDA). I think this will get
> > > us over the hump, but we have to rethink how we code things, especially
> > > traversals, which now become lists of scheduled tasks as in FLASH (from
> > > van de Geijn).
> > >
> > > Matt
> > >
> > >
> > >     Barry
> > >
> > >     When coding a new numerical algorithm for PETSc we would just code
> > >     in Python, then when tested and happy with it, reimplement in
> > >     Py{CL, CUDA, OpenMP-C}.
> > >
> > >     The other choice is designing and implementing our own cool/great OO
> > >     language with the flexibility and power we want, but I fear that is
> > >     way too hard, and why not instead leverage Python.
> > >
> > >
> > > --
> > > What most experimenters take for granted before they begin their
> > > experiments is infinitely more interesting than any results to which
> > > their experiments lead.
> > > -- Norbert Wiener
> >
> >
> --
> -------------------------------------------
> Ethan Coon
> DOE CSGF - Graduate Student
> Dept. Applied Physics & Applied Mathematics
> Columbia University
> 212-854-0415
>
> http://www.ldeo.columbia.edu/~ecoon/
> -------------------------------------------
>

--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener