[petsc-dev] plans for testing examples?

Gerard Gorman g.gorman at imperial.ac.uk
Mon Sep 17 16:43:32 CDT 2012


Jed Brown emailed the following on 17/09/12 15:36:
> On Mon, Sep 17, 2012 at 9:08 AM, Barry Smith <bsmith at mcs.anl.gov
> <mailto:bsmith at mcs.anl.gov>> wrote:
>
>      How do you map the loops in some of the shell scripts in the
>     makefiles?
>
>
> Not all the loop constructs are parsed now, but I can add that in an
> hour or so. It'll look like
>
> with Executable('ex19.c'):
>     for mtype in 'aij baij sbaij'.split():
>         for vecscatter in 'rsend ssend alltoall'.split():
>             Test(id=('thename', mtype, vecscatter),
>                  args='-mat_type %s -vecscatter_%s' % (mtype, vecscatter),
>                  compare='ex19_thename')
>
> This registers "separate tests" for each, but they all compare against
> the same reference output. We can then run this group by globbing
>
> ./ptest.py test 'ex19_thename_*'
>
> or a single one by
>
> ./ptest.py test ex19_thename_baij_ssend
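For anyone curious how the registration-plus-globbing scheme above could work, here is a minimal sketch. The registry internals and the `register`/`select` helper names are my assumptions, not the actual ptest.py API; only the loop values and the glob patterns come from Jed's snippet.

```python
import fnmatch

# Hypothetical registry mimicking the quoted DSL: each registration records
# a fully expanded test id plus its arguments and the shared reference output.
registry = {}

def register(exe, id_parts, args, compare):
    # e.g. ('ex19.c', ('thename', 'baij', 'ssend')) -> 'ex19_thename_baij_ssend'
    test_id = '_'.join((exe.rsplit('.', 1)[0],) + tuple(id_parts))
    registry[test_id] = {'args': args, 'compare': compare}

# Expand the nested loops from the quoted snippet into separate tests,
# all comparing against the same reference output 'ex19_thename'.
for mtype in 'aij baij sbaij'.split():
    for vecscatter in 'rsend ssend alltoall'.split():
        register('ex19.c', ('thename', mtype, vecscatter),
                 '-mat_type %s -vecscatter_%s' % (mtype, vecscatter),
                 compare='ex19_thename')

def select(pattern):
    # Glob-style selection, as in: ./ptest.py test 'ex19_thename_*'
    return sorted(t for t in registry if fnmatch.fnmatch(t, pattern))
```

With this sketch, `select('ex19_thename_*')` returns all nine expanded tests, while `select('ex19_thename_baij_ssend')` picks out exactly one.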
>

Will/Do these scripts return $? != 0, print "fail", or give some other
consistent signal upon failure? I can appreciate that you want the results
output to some kind of database for interrogation (it would be nice to see
whether commits improved or worsened code performance), but a consistent
signal for test failure would also make it easy to integrate with buildbot
or other CI tools.

Let me know when it's ready for playing with and I'll hook it up with my
local buildbot.

Cheers
Gerard




More information about the petsc-dev mailing list