[petsc-dev] Nightly tests quick summary page

Matthew Knepley knepley at gmail.com
Wed Jan 23 22:03:41 CST 2013


On Wed, Jan 23, 2013 at 9:29 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

>
> On Wed, Jan 23, 2013 at 9:08 PM, Matthew Knepley <knepley at gmail.com> wrote:
>
>> I am always skeptical of big programs to overhaul a large piece of
>> infrastructure that works fairly well.
>>
>> However, there is a really simple thing we need that would make us much,
>> much, much better at using our own tests: a better way to test numerical
>> output. I am not sure what the right thing to do is, or I would have
>> already done it.
>>
>> The current best solution is to print fewer digits, which, judging from
>> the HTML page, is not sufficient. I think current PETSc output is so
>> stylized that parsing it is feasible, and would allow nice diffs with
>> tolerances tailored to the type of output and the individual test.
>>
>
> MOOSE puts all of its test output into Exodus files and uses exodiff. That
> has the advantage of being structured enough that it can be diffed with
> rtol and atol.
>
> OTOH, we have a challenge that's mostly distinct from a discretization
> package's. We're not testing discretization error (which is unchanging as
> long as the discretization doesn't change); we're testing the
> intermediate, unconverged values, and comparing them using a relative
> tolerance (versus an absolute tolerance, which would be better).
>
> As we attempt to make our interfaces better for graphical front-ends and
> automatic high-level controllers, I think we should try to use monitors
> that provide structured output. This could be a JSON file with object
> identification and convergence history, or perhaps an SQLite database. I
> suspect we could handle most of our FP-sensitive testing with only a
> handful of structured monitors. Providing this structured output is
> providing an API, so we should try to converge rapidly on an extensible
> data model that can be relatively stable.
>
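
A rough sketch of what one such structured record might look like (every
field name here is hypothetical, not an existing PETSc format; it only
illustrates pairing object identification with a convergence history):

    import json

    # Hypothetical sketch only: none of these field names are an existing
    # PETSc format. They just illustrate attaching object identification
    # to a convergence history that a harness can diff with rtol/atol.
    record = {
        "object": {"class": "KSP", "type": "gmres", "prefix": ""},
        "history": [
            {"it": 0, "rnorm": 1.175626548491e+00},
            {"it": 1, "rnorm": 9.753123459873e-03},
        ],
        "reason": "CONVERGED_RTOL",
    }
    print(json.dumps(record, indent=2))

Even something that flat would be enough for a diff tool to apply
per-object tolerances.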

I am torn here. I would bet serious money that I can write a parser for our
current numerical output in 1/10 of the time it takes to write new
output/parsers and set up all the associated infrastructure (databases). If
all we ever do is compare, we should just write the parser. Can you think of
any other value that would come from JSON test output?
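
To make the comparison concrete, here is a minimal sketch of such a parser,
assuming stylized monitor lines of the form
"  0 KSP Residual norm 1.175626548491e+00" (the default tolerances below are
illustrative, not a proposal):

    import re

    # Minimal sketch: pull (iteration, residual norm) pairs out of
    # stylized PETSc -ksp_monitor output and diff two runs with rtol/atol.
    MONITOR_RE = re.compile(r"^\s*(\d+) KSP Residual norm (\S+)")

    def extract_norms(text):
        """Return the (iteration, residual norm) pairs found in `text`."""
        return [(int(m.group(1)), float(m.group(2)))
                for m in map(MONITOR_RE.match, text.splitlines()) if m]

    def norms_match(expected, actual, rtol=1e-5, atol=1e-12):
        """True if the two histories agree to the given tolerances."""
        if len(expected) != len(actual):
            return False
        return all(ie == ia and abs(re_ - ra) <= atol + rtol * abs(re_)
                   for (ie, re_), (ia, ra) in zip(expected, actual))

The tolerances tailored to each test then become arguments to the diff
rather than a choice of how many digits to print.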

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener