[petsc-dev] Regression tests

Gerard Gorman g.gorman at imperial.ac.uk
Sun Jul 29 12:50:30 CDT 2012


Matthew Knepley emailed the following on 28/07/12 21:52:
> On Sat, Jul 28, 2012 at 6:53 AM, Gerard Gorman
> <g.gorman at imperial.ac.uk <mailto:g.gorman at imperial.ac.uk>> wrote:
>
>     Slightly off topic - but I have Buildbot set up locally to keep an eye
>     on petsc-dev and various branches of petsc that I care about (if you
>     are curious, see http://stereolab.ese.ic.ac.uk:8080/waterfall - knock
>     yourself out with the force build button if you wish)
>
>     Because this is a bot, 99% of the time I only care about "pass",
>     "fail" ($? != 0 is counted as a fail) and "warning". For the test
>     harness in our own application code we just parse the output of unit
>     tests for "pass", "fail" and "warning", and list the tests that gave a
>     warning or fail at the end so a developer can take a closer look. I
>     would like to do similar grepping for PETSc tests. Are the test
>     messages standardised so that I can do this? At first glance I have:
>
>     "pass" == "run successfully with"
>     "warning" == "Possible problem with"
>     "fail" == "mpiexec not found" or $? != 0
>
>
> I think you want
>
>   http://petsc.cs.iit.edu/petsc/petsc-dev/annotate/c11230be07bd/bin/maint/checkBuilds.py
>
> for builds, and what you have for tests.
>
>    Matt
>
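For what it's worth, the pass/warning/fail phrases quoted above can be turned
into a small classifier. This is a minimal sketch assuming those exact strings
(they come from this thread, not from any documented PETSc interface), and it
leaves the $? != 0 exit-status check to whatever wrapper script invokes the
tests:

```python
# Hypothetical classifier for PETSc test output, based on the phrases
# discussed in this thread; the strings are assumptions, not a stable API.
PASS_MARKER = "run successfully with"
WARN_MARKER = "Possible problem with"
FAIL_MARKER = "mpiexec not found"


def classify(line):
    """Return 'pass', 'warning', 'fail', or None for an output line."""
    if FAIL_MARKER in line:
        return "fail"
    if WARN_MARKER in line:
        return "warning"
    if PASS_MARKER in line:
        return "pass"
    return None


def summarize(lines):
    """Count statuses and collect non-pass lines for a closer look."""
    counts = {"pass": 0, "warning": 0, "fail": 0}
    flagged = []
    for line in lines:
        status = classify(line)
        if status is None:
            continue
        counts[status] += 1
        if status != "pass":
            flagged.append((status, line.strip()))
    return counts, flagged
```

A Buildbot step could feed the captured stdout of the test run through
summarize() and fail the build whenever flagged is non-empty or the process
exit status is non-zero.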

Thanks

Gerard
