[petsc-dev] Pushing non-working code

Matthew Knepley knepley at gmail.com
Sun Feb 3 12:01:35 CST 2013


On Sun, Feb 3, 2013 at 12:49 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

>
> On Sun, Feb 3, 2013 at 11:31 AM, Matthew Knepley <knepley at gmail.com>wrote:
>
>>  How is _not typing 'hg merge' or 'hg pull'_ harder than typing it? Do
>>> your work in a bookmark and merge it when it's ready for review. It's not a
>>> hard concept.
>>>
>>
>> It is clearly more work.
>>
>
> How so? Every (non-science) open source project I've worked with expects
> this sort of workflow.
>
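(As a concrete sketch, the bookmark workflow described above would look
roughly like the following; the bookmark names "main" and "my-feature"
are just placeholders, and the commands are standard Mercurial:)

    hg pull                     # sync with petsc-dev
    hg bookmark main            # mark the mainline head
    hg bookmark my-feature      # start the feature on its own bookmark
    # ...edit and 'hg commit' as usual; commits advance my-feature...
    hg update main              # return to the mainline
    hg merge my-feature         # merge when the feature is ready for review
    hg commit -m "merge my-feature"
    hg push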
>
>>
>>
>>>
>>>>
>>>>>  There are no "gains" from a baseline. This is
>>>>>> a point I have made multiple times. Changes must be justified.
>>>>>>
>>>>>
>>>>> I provided a long list of justifications that you have not responded
>>>>> to. There is a great deal of empirical evidence to back my claims.
>>>>>
>>>>
>>>> I have responded to each and every point carefully. You need to listen.
>>>
>>>
>>> You have not said anything about reviewability, actual bug rates,
>>> extensibility, ability to recognize distinct features in the history, or
>>> realized and perceived stability and lack of spurious warnings when users
>>> pull petsc-dev.
>>>
>>
>> Reviewability: I have responded that a) you are reviewing at the wrong
>> time, and b) your second example was a perfectly reviewable checkin which
>> resulted in an easy fix.
>>
>
>> Actual bug rates: you have not offered any evidence here, so my
>> assertion is that they do not decrease.
>>
>
> [citation needed]
>
> http://kev.inburke.com/kevin/the-best-ways-to-find-bugs-in-your-code/
>

Notice here that "Informal code review" is the second-worst form of
checking by percentage of bugs found, and thus would appear to be an
inefficient way to go about things. Formal code review does not happen
on every checkin, but only after a significant amount of work has gone
into the project. He makes a big deal about catching bugs early, which
is a good point, but not the point here.


>
> http://en.wikipedia.org/wiki/Code_review
> http://www.codinghorror.com/blog/2006/01/code-reviews-just-do-it.html
>
>
>> Extensibility: Your assertion is that extensibility benefits from code
>> review. I agree. You are reviewing at the wrong time. Code reviews
>> should be organized, not carried out after every checkin.
>>
>
> My view is that the history should _be the organization_. If you want to
> organize it differently, please suggest an alternative system.
>

Formal code reviews, which we had to do at Akamai, were organized
meetings with all the developers, where we got a schedule of the code
to review, the functionality it covered, and the aim of the review.


>
>> Ability to recognize distinct features in history: I do not think this is
>> worth preserving at the cost of a lot more process. This is the linux model
>> where everything is smoothed out into a series of clean patches. We have
>> explicitly chosen not to do this. It obscures the actual development
>> process and I am not convinced it is as useful here as they claim in the
>> kernel.
>>
>
> The amount of process is proportional to the required stability and number
> of developers and users. A bug in kernel 'master' is an extremely bad
> thing. Even when fixed the same day, it can cost many thousands of dollars
> (it interrupts development for many developers and tends to discourage
> early adopters). PETSc is not such critical infrastructure, but I think it
> would be good to offer a more reliable low-latency way to interact with
> applications, which in turn requires more stability. Also, bugs that make
> it into a release are much more expensive than bugs fixed early, even
> without counting the indirect effect of discouraging users.
>
> I'm not proposing the full kernel workflow, but having slightly better
> feature provenance and encouraging up-front review would be good.
>

I am not proposing anarchy. I think everyone should push code with no
warnings. We see from the nightly tests that this does not always happen,
but I am sure a timeline would reveal it is much better today. I agree
that people should try to keep features together in coherent checkins. I
label all my checkins at the top with the components that they impact.
However, imposing an iron rule seems unwarranted, and I think it's clear
that hard rules and lots of process can have bad side effects (witness
the reasonable-sounding rule at MS that all checkins must pass the test
suite, which resulted in horrible hacking to accomplish that).


>
>> Warnings: I did respond to that, so it's not worth repeating here.
>>
>
> Only that you "try not to".
>

What else could I possibly promise? Promising infallibility shows a lack of
understanding of the problem.

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener