[petsc-dev] Regression tests

Matthew Knepley knepley at gmail.com
Sat Jul 28 15:52:50 CDT 2012


On Sat, Jul 28, 2012 at 6:53 AM, Gerard Gorman <g.gorman at imperial.ac.uk> wrote:

> Slightly off topic - but I have Buildbot set up locally to keep an eye
> on petsc-dev and various branches of petsc that I care about (if you are
> curious see http://stereolab.ese.ic.ac.uk:8080/waterfall - knock
> yourself out with the force build button if you wish)
>
> Because this is a bot, 99% of the time I only care about "pass", "fail"
> ($? != 0 is counted as a fail) and "warning". For the test harness in our
> own application code we just parse the output of unit tests for "pass",
> "fail" and "warning", and list the tests that gave a warning or fail at
> the end so a developer can take a closer look. I would like to do similar
> grepping for PETSc tests. Are the test messages standardised so that I
> can do this? At first glance I have:
>
> "pass" == "run successfully with"
> "warning" == "Possible problem with"
> "fail" == "mpiexec not found" or $? != 0
>
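A minimal sketch of that kind of grepping, using the three patterns quoted above as classifiers (the function names and the example log lines are hypothetical, not part of the PETSc test harness):

```python
import re

# Patterns taken from the message above: these are the strings the
# PETSc test output is assumed to emit for each outcome.
PASS_RE = re.compile(r"run successfully with")
WARN_RE = re.compile(r"Possible problem with")
FAIL_RE = re.compile(r"mpiexec not found")

def classify(line):
    """Return 'pass', 'warning', 'fail', or None for an irrelevant line."""
    if PASS_RE.search(line):
        return "pass"
    if WARN_RE.search(line):
        return "warning"
    if FAIL_RE.search(line):
        return "fail"
    return None

def summarize(lines):
    """Tally outcomes and collect the lines that warned or failed,
    so a developer can take a closer look at just those."""
    counts = {"pass": 0, "warning": 0, "fail": 0}
    flagged = []
    for line in lines:
        status = classify(line)
        if status is not None:
            counts[status] += 1
            if status != "pass":
                flagged.append(line.rstrip())
    return counts, flagged
```

A bot would additionally treat a nonzero exit status of the test run itself as a "fail", independently of any line matching.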

I think you want


http://petsc.cs.iit.edu/petsc/petsc-dev/annotate/c11230be07bd/bin/maint/checkBuilds.py

for builds, and what you have for tests.

   Matt

> Cheers
> Gerard
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

