[petsc-dev] plans for testing examples?

Gerard Gorman g.gorman at imperial.ac.uk
Mon Sep 17 18:25:15 CDT 2012


Jed Brown emailed the following on 17/09/12 23:49:
> On Mon, Sep 17, 2012 at 5:28 PM, Gerard Gorman
> <g.gorman at imperial.ac.uk> wrote:
>
>     So if I understand you correctly, I should start a Python session
>     (handy, since buildbot is all Python), import your test harness and
>     then start calling tests. If this is the case (kick it back if I've
>     misunderstood), then iterating through a list of tests would seem to
>     be the way to go. This would give maximum flexibility to filter tests
>     into categories (for example, MPI tests might need to be handled
>     differently than serial tests).
>
>
> So I'm planning to provide a rich API for filtering/selection. I see
> that as a key motivation for building this system in the first place.
> Of course you can run a query and get back a list of test objects, but
> you could also just say "run everything matching this query and report
> the result". In that case, you would get called back with each
> completed test.
>  
>
>     This would also give plenty of flexibility to do whatever we want
>     with the output - e.g. write out all -log_summary/stdout/stderr
>     output to the buildbot log in case of error and drop it otherwise.
>
>
> The callback will be provided with all that information.
>  
>
>     Do you have a plan for how to handle machine-specific ways to run
>     mpiexec/queueing etc.? Are you going to recycle methods from the
>     BuildSystem, or will there be a mechanism to supply my own mpiexec
>     or batch-queueing harness?
>
>
> There will be a runner interface. You'll be able to write a plugin to
> run tests using some custom submission system. It will report back
> return code, stderr, and stdout.
>

Sounds great.
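
To make my end concrete: from the buildbot step I picture driving the
query/callback interface with something like the sketch below. The
module name, the query keywords and the fields on the result object are
all guesses on my part, not your actual API.

    import sys
    import petsctest   # hypothetical name for the test harness module

    def report(result):
        # Callback invoked with each completed test; keep the full
        # stdout/stderr (and -log_summary) output only on failure and
        # drop it otherwise.
        if result.returncode != 0:
            sys.stdout.write(result.stdout)
            sys.stdout.write(result.stderr)
        else:
            sys.stdout.write('PASS: %s\n' % result.name)

    # "Run everything matching this query and report the result":
    petsctest.run(query=dict(parallel=True), callback=report)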

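For the runner side, something along these lines is what I have in mind
for supplying our own mpiexec; the class name and the run() signature
are again just my guess at what the plugin interface might look like.

    import subprocess

    class MpiexecRunner(object):
        # Sketch of a runner plugin for a site-specific mpiexec; the
        # constructor and run() signature are invented for illustration.
        def __init__(self, mpiexec='mpiexec'):
            self.mpiexec = mpiexec

        def run(self, np, args):
            # Launch the test and report back return code, stdout and
            # stderr, as described above.
            cmd = [self.mpiexec, '-n', str(np)] + list(args)
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE)
            out, err = p.communicate()
            return p.returncode, out, err

A batch-queue variant would wrap the command in a job submission and
wait for completion instead, but the shape would be the same.
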
> In terms of Python library dependencies, I would really like to use
> gevent. It's portable and provides a very good parallel abstraction.
> It does use a (tiny) C extension module. Is that dependency going to
> be a deal-breaker?

It is distributed with Ubuntu/Debian, so I cannot see any problem with this.


> Since the testing dependency graph can (and generally will) be
> constructed without needing to do anything asynchronously, the gevent
> issues above are not nearly as acute. We could use a basic thread pool
> and have each thread take items out of the queue. The GIL is released
> when they do IO, so this is fine. For configure, the dependency graph
> is potentially dynamic, in which case a system that can manage many
> suspended contexts would be ideal. I've done simple implementations
> with several systems, and the gevent code is both the cleanest and
> fastest, so that's what I've started with.

Sounds good. Let me know when there is a branch I can try hooking up.
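
Just to check I follow the thread-pool idea, this is roughly what I
picture (a bare-bones Python 2 sketch, not your code):

    import subprocess
    from Queue import Queue      # 'queue' on Python 3
    from threading import Thread

    def worker(q, results):
        while True:
            cmd = q.get()
            if cmd is None:      # poison pill, shut this worker down
                break
            # communicate() blocks in IO with the GIL released, so the
            # threads genuinely run tests concurrently.
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE)
            out, err = p.communicate()
            results.append((cmd, p.returncode, out, err))

    def run_all(commands, nthreads=4):
        q, results = Queue(), []
        workers = [Thread(target=worker, args=(q, results))
                   for _ in range(nthreads)]
        for w in workers:
            w.start()
        for cmd in commands:
            q.put(cmd)
        for _ in workers:
            q.put(None)
        for w in workers:
            w.join()
        return results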

Cheers
Gerard



