[petsc-dev] broken nightlybuilds (next vs next-tmp)

Smith, Barry F. bsmith at mcs.anl.gov
Sat Nov 11 14:52:18 CST 2017



> On Nov 11, 2017, at 11:49 AM, Jed Brown <jed at jedbrown.org> wrote:
> 
> Matthew Knepley <knepley at gmail.com> writes:
> 
>> On Sat, Nov 11, 2017 at 12:37 PM, Satish Balay <balay at mcs.anl.gov> wrote:
>> 
>>> On Sat, 11 Nov 2017, Matthew Knepley wrote:
>>> 
>>>>> In the long term - Barry wants to get rid of next..
>>>> 
>>>> 
>>>> 1) I think next really prevents master from getting screwed up (witness
>>>> next)
>>>> 
>>>> 2) I think we are actually finding interaction bugs there.
>>>> 
>>>> Are those points wrong, or is there another way to do these things?
>>> 
>>> Next is an integration testing mechanism. The prerequisite for it [I
>>> think] is that one should test a branch properly before merging it to
>>> next. However, we are not doing proper testing before merging to next -
>>> and are relying on next to do that part as well.
>>> 
>>> So with the current next, one has to figure out which branches are
>>> breaking the tests [which takes time - and most of us are not doing it] -
>>> and then hope they get fixed quickly. Otherwise next stays broken for a
>>> long time [and other branches in next - which could be clean - don't
>>> receive sufficient confidence to graduate to master].
>>> 
>>> So Barry's thought wrt getting rid of next is to have a better way of
>>> testing the feature branch one wants to test (i.e. master+feature). It's
>>> not clear to me how many integration issues we've addressed with our
>>> current next model. [It's mostly been individual branch issues.]
>>> 
>>> Also, if feature-1 and feature-2 are feature branches that are tested
>>> in next [wrt integration], the following should be equivalent to
>>> testing 'master + feature1 + feature2' - aka the current next model:
>>> 
>>> 1. test master+feature1
>>> 2. success => merge feature1 to master
>>> 3. test master+feature2
>>> 4. success => merge feature2 to master
>>> 
>>> Note: my next-tmp is an attempt to move closer to 'master+feature1'
>>> testing, away from 'master+feature1+feature2' testing [yeah - it's more
>>> like master + 2/3 branches in next-tmp vs master + 10/15 branches in next].
>>> 
>>> Also, I'm restarting next-tmp from a clean master when merging a new set
>>> of branches to test, and throwing away branches with problems - retesting
>>> them only after they have fixes. [This way a broken branch does not keep
>>> next-tmp broken until it gets fixed.]
>> 
>> 
>> I don't think we have the resources to run full tests on every branch one
>> at a time. Do we?
> 
> No, and after each merge of a branch to 'master', the prospective merge
> of other branches would need to be retested.  But the idea that the
> automated test suite is infallible is also flawed.

   Of course it is flawed. But our current next model is flawed in exactly the same way.
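
   For concreteness, a minimal sketch of the per-branch testing workflow
discussed above, assuming a hypothetical feature branch 'feature-1' and a
throwaway integration branch 'next-tmp' (branch names and the 'make alltests'
target are illustrative stand-ins, not our actual nightly scripts):

    # Start from a clean, up-to-date master
    git checkout master
    git pull --ff-only origin master

    # Build a throwaway integration branch: master + the one feature branch
    git checkout -B next-tmp master
    git merge --no-ff feature-1

    # Run the test suite against master+feature-1
    # ('make alltests' stands in for whatever nightly test target is used)
    make alltests

    # On success, the feature branch graduates directly to master
    git checkout master
    git merge --no-ff feature-1
    git push origin master

    # On failure, throw the integration branch away; retest feature-1
    # only after it has fixes, so it cannot keep next-tmp broken
    git checkout master
    git branch -D next-tmp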



