[petsc-dev] plans for testing examples?

Gerard Gorman g.gorman at imperial.ac.uk
Mon Sep 17 17:28:37 CDT 2012


Jed Brown emailed the following on 17/09/12 22:50:
> On Mon, Sep 17, 2012 at 4:43 PM, Gerard Gorman
> <g.gorman at imperial.ac.uk <mailto:g.gorman at imperial.ac.uk>> wrote:
>
>     Will/Do these scripts return $? != 0, print "fail" or something else
>     consistent upon failure?
>
>
> Absolutely. I'll also make a way to associate actions with completion
> so you could get your python module called.
>  
>
>     I can appreciate you want the results outputted
>     in some kind of database for interrogation (would be nice to see if
>     commits improved/worsened code performance) but a consistent
>     signal for
>     test failure would make it easy to also integrate it with buildbot
>     or other.
>
>
> Would you prefer that my code runs the tests and reports completion
> status to you (via Python callback which you can run shell command
> from if you like) or would you rather query a list of all tests, then
> each one individually?

So if I understand you correctly, I should start a Python session (handy
since buildbot is all Python), import your test harness, and then start
calling tests. If that is the case (kick it back if I've misunderstood),
then iterating through a list of tests would seem to be the way to go.
This would give maximum flexibility to filter tests into categories (for
example, MPI tests might need to be handled differently than serial
tests). It would also give plenty of flexibility to do whatever we like
with the output - e.g. we might write all -log_summary/stdout/stderr
output to the buildbot log in case of error and drop it otherwise.
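
To make the idea concrete, here is a rough sketch of the workflow I have
in mind. All the names here (Test, run_suite, the category strings) are
made up for illustration - nothing below comes from the actual harness,
which doesn't exist yet - it just shows "query a list of tests, filter by
category, run each one, and forward output to the buildbot log only on
failure":

```python
# Hypothetical sketch: none of these names come from the real PETSc test
# harness (still being designed in this thread). It only illustrates
# iterating a test list, filtering by category, and keeping output on error.
import subprocess

class Test:
    def __init__(self, name, cmd, category):
        self.name = name          # e.g. "ex19_mpi_np4"
        self.cmd = cmd            # command line to run
        self.category = category  # e.g. "serial" or "mpi"

def run_test(test):
    """Run one test; a nonzero return code signals failure."""
    proc = subprocess.run(test.cmd, shell=True,
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout, proc.stderr

def run_suite(tests, categories=None):
    """Iterate the test list, optionally filtering by category."""
    failures = []
    for t in tests:
        if categories and t.category not in categories:
            continue
        ok, out, err = run_test(t)
        if not ok:
            # On error, forward stdout/stderr (and any -log_summary
            # output) to the buildbot log; otherwise drop it.
            failures.append((t.name, out, err))
    return failures

tests = [Test("true_test", "true", "serial"),
         Test("false_test", "false", "serial")]
print([name for name, _, _ in run_suite(tests)])  # -> ['false_test']
```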

Do you have a plan for handling machine-specific ways of running
mpiexec, batch queueing, etc.? Are you going to recycle methods from
BuildSystem, or will there be a mechanism for supplying my own mpiexec
or batch-queueing harness?
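
By "a mechanism to supply my own", I mean something along these lines -
again purely hypothetical, since the harness API is undecided; the point
is just that the caller can swap in a site-specific launcher:

```python
# Hypothetical sketch of a pluggable launcher, assuming the harness lets
# the caller supply how parallel jobs are started. The function names and
# the PBS example are invented for illustration only.
def default_launcher(executable, nprocs, args):
    """Build a plain mpiexec command line."""
    return ["mpiexec", "-n", str(nprocs), executable] + list(args)

def pbs_launcher(executable, nprocs, args):
    """Example of a site-specific override, e.g. for a batch queue."""
    return ["qsub", "-l", "select=%d" % nprocs, "--", executable] + list(args)

def command_for(test_exe, nprocs, args=(), launcher=default_launcher):
    # Serial tests can bypass the launcher entirely.
    if nprocs == 1:
        return [test_exe] + list(args)
    return launcher(test_exe, nprocs, args)

print(command_for("./ex19", 4))  # -> ['mpiexec', '-n', '4', './ex19']
```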

Cheers
Gerard
