[petsc-dev] every test example runs in a new directory with new test harness
Matthew Knepley
knepley at gmail.com
Thu Feb 9 18:15:50 CST 2017
On Thu, Feb 9, 2017 at 5:54 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> I had no idea what a subtest was (horrible name BTW) so I said all
> subtests should have the same output.
>
> I now understand what Scott meant by a subtest but I am still not sure
> that it is a good idea. The benefits/features of subtests are
>
> 1) you don't need to type the base command line arguments multiple times
>
> 2) subtests need to be run sequentially in the same work directory
>
> 3) others?
>
> Now 1) seems like a nice but not crucial feature, while 2) seems like a
> horrible idea. 2) seems to exist only so we can have test cases like this
> particular one of yours, where the first run generates a file and the second
> run reads the file back in. This type of test is useful because it provides
> some assurance that the "writing" actually wrote a suitable file (instead of
> possibly garbage that never gets tested). Jed proposed an alternative
> approach for making sure output files are correct that does not require one
> test case to be run after another: instead, the test harness runs a diff
> (for example h5diff) of the output file against a "known good" file.
>
> I don't see a downside to Jed's proposal.
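A minimal sketch of the diff-against-reference idea, using plain files and
`cmp` as stand-ins for HDF5 output and `h5diff` (all file names here are
illustrative, not from the actual harness):

```shell
# Stand-in for the output file the test run would write (sol.h5 in the thread):
printf 'checkpoint-data\n' > sol.out
# Stand-in for the reference file that would be checked into the repository:
printf 'checkpoint-data\n' > sol_ref.out
# The harness compares fresh output against the reference; with real HDF5
# files this comparison would be h5diff rather than cmp.
cmp -s sol.out sol_ref.out && echo PASS || echo FAIL
```

The point of the design is that the read test and the write test no longer
depend on running in sequence in the same directory.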
So I need to check in this checkpoint file in order to test reading it in,
and also to do the diffing? I am not sure I like that better.
Shouldn't we design our test system to do this simple thing, rather than
clutter our repository for all time?
Matt
>
> Barry
>
>
>
> > On Feb 9, 2017, at 4:32 PM, Matthew Knepley <knepley at gmail.com> wrote:
> >
> > On Mon, Feb 6, 2017 at 9:15 AM, Scott Kruger <kruger at txcorp.com> wrote:
> >
> > The basic idea of running multiple commands within a single shell
> > script was what I called a subtest (for lack of a better word).
> > So:
> >
> > This almost works. However, the two tests create separate output, but the
> > text below checks both runs against the same output file. I tried using a
> > "subsuffix", but it did not change the output filename.
> >
> > Thanks,
> >
> > Matt
> >
> >
> > test:
> >   suffix: restart
> >   requires: hdf5
> >   args: -run_type test -refinement_limit 0.0 -bc_type dirichlet -interpolate 1 -petscspace_order 1
> >   test:
> >     args: -dm_view hdf5:sol.h5 -vec_view hdf5:sol.h5::append
> >   test:
> >     args: -f sol.h5 -restart
> >
> > The args in the subtest inherit from the parent test. This seems
> > to be generally useful as a testing idiom in petsc tests as this
> > example nicely shows.
> >
> > Each mpiexec would be tested separately and reported separately.
> > This would give you what you want, and should work as is.
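The inheritance idiom Scott describes can be sketched with shell variables (a
sketch only; the exact expansion the harness performs is an assumption, and the
args are copied from the spec above):

```shell
# Parent args shared by every subtest, taken from the parent test: block:
base_args="-run_type test -refinement_limit 0.0 -bc_type dirichlet -interpolate 1 -petscspace_order 1"
# Each subtest appends its own args to the inherited ones; the harness would
# run the resulting command lines sequentially in one work directory.
subtest1="$base_args -dm_view hdf5:sol.h5 -vec_view hdf5:sol.h5::append"
subtest2="$base_args -f sol.h5 -restart"
echo "$subtest1"
echo "$subtest2"
```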
> >
> >
> > Tobin pointed out that I broke the for loops and some of the subtest
> > functionality in some of the other feature implementations. We
> > have come to consensus (right, Tobin?) on the
> > desired functionality and implementation. A pull request
> > is planned this week. It doesn't affect this directly, but
> > should have some minor improvements (like in the reporting).
> >
> > Scott
> >
> >
> > On 2/6/17 7:10 AM, Matthew Knepley wrote:
> > On Mon, Feb 6, 2017 at 1:05 AM, Jed Brown <jed at jedbrown.org> wrote:
> >
> >
> > Barry Smith <bsmith at mcs.anl.gov> writes:
> >
> > > test:
> > >   suffix: restart_0
> > >   requires: hdf5
> > >   args: -run_type test -refinement_limit 0.0 -bc_type dirichlet -interpolate 1 -petscspace_order 1 -dm_view hdf5:sol.h5 -vec_view hdf5:sol.h5::append
> > >
> > > test:
> > >   suffix: restart_1
> > >   requires: hdf5
> > >   args: -run_type test -refinement_limit 0.0 -bc_type dirichlet -interpolate 1 -petscspace_order 1 -f sol.h5 -restart
> > >
> > > See a problem?
> > >
> > > Should the same run of the example view the files and then load
> > > them back in? Versus trying to read in a data file from another run that
> > > may not even have been created yet, and that, even if it was, was
> > > definitely created in a different directory?
> >
> > So if write only is broken, do you want both to fail? I think it's
> > better to read and write separately, with comparison using h5diff, since
> > that independently tests read vs. write and establishes backward
> > compatibility, which you'd really like the test system to make you deal
> > with explicitly.
> >
> >
> > I know the test is broken, but I did already mail the list about this and
> > was waiting for an answer to be worked out.
> >
> > I agree with Satish that running two commands would be great. I could
> > rewrite the example to both write and load, but that would complicate it.
> > Also, I am trying to establish the pattern I expect the user to follow for
> > checkpointing.
> >
> > Matt
> >
> > --
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which
> > their experiments lead.
> > -- Norbert Wiener
> >
> > --
> > Tech-X Corporation kruger at txcorp.com
> > 5621 Arapahoe Ave, Suite A Phone: (720) 974-1841
> > Boulder, CO 80303 Fax: (303) 448-7756
> >
> >
> >
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener