[petsc-dev] testing in parallel
Scott Kruger
kruger at txcorp.com
Mon Apr 29 18:04:47 CDT 2019
FYI -- I have reproduced all the problems but am still looking at it.
I thought perhaps it was something about globsearch's invocation of Python,
but it's not -- I get the same thing even with gmake's native filter
(in fact, it appears to be worse).
I'm seeing something odd in the counts directory, which is where each
individual test run stores its output, but I need to do more testing to
figure out what's going on.
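In case it helps others poke at the same thing, here's a rough sketch of the
check I'm doing -- the counts directory path below is my assumption about
where the harness writes each test's output, so adjust it if yours lives
elsewhere:

  # run the same filtered set twice and compare what each pass records;
  # clear leftovers from earlier runs first so the two listings are comparable
  make -j20 -f gmakefile.test test globsearch="mat*"
  ls $PETSC_ARCH/tests/counts | sort > pass1.lst
  make -j20 -f gmakefile.test test globsearch="mat*"
  ls $PETSC_ARCH/tests/counts | sort > pass2.lst
  diff pass1.lst pass2.lst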
Scott
On 4/22/19 11:00 PM, Jed Brown via petsc-dev wrote:
> I don't know how this would happen and haven't noticed it myself.
> Perhaps Scott can help investigate. It would help to know which tests
> run in each case. To debug, I would make a dry-run or skip-all mode
> that skips actually running the tests and just reports success (or
> skip).
>
> Stefano Zampini <stefano.zampini at gmail.com> writes:
>
>> The print-test target seems ok wrt race conditions
>>
>> [szampini at localhost petsc]$ make -j1 -f gmakefile.test print-test globsearch="mat*" | wc
>> 1 538 11671
>> [szampini at localhost petsc]$ make -j20 -f gmakefile.test print-test globsearch="mat*" | wc
>> 1 538 11671
>>
>> However, if I run the tests, I get two different outputs
>>
>> [szampini at localhost petsc]$ make -j20 -f gmakefile.test test globsearch="mat*"
>> [..]
>> # -------------
>> # Summary
>> # -------------
>> # success 1226/1312 tests (93.4%)
>> # failed 0/1312 tests (0.0%)
>> # todo 6/1312 tests (0.5%)
>> # skip 80/1312 tests (6.1%)
>>
>> [szampini at localhost petsc]$ make -j20 -f gmakefile.test test globsearch="mat*"
>> [..]
>> # -------------
>> # Summary
>> # -------------
>> # success 990/1073 tests (92.3%)
>> # failed 0/1073 tests (0.0%)
>> # todo 6/1073 tests (0.6%)
>> # skip 77/1073 tests (7.2%)
>>
>>> On Apr 22, 2019, at 8:12 PM, Jed Brown <jed at jedbrown.org> wrote:
>>>
>>> Stefano Zampini via petsc-dev <petsc-dev at mcs.anl.gov> writes:
>>>
>>>> Scott,
>>>>
>>>> I have noticed that make -j20 -f gmakefile.test test globsearch="mat*" does
>>>> not always run the same number of tests. How hard is it to fix this race
>>>> condition in the generation of the rules?
>>>
>>> Can you reproduce with the print-test target? These are just running
>>> Python to create a list of targets, and should all take place before
>>> executing rules.
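
Re Jed's point about knowing which tests run in each case: a minimal sketch
for diffing the executed test names from two passes. The "ok <name>" lines
are what I assume the harness prints per test, so tweak the grep pattern if
your output looks different:

  # keep the full output of two identical invocations
  make -j20 -f gmakefile.test test globsearch="mat*" > pass1.log 2>&1
  make -j20 -f gmakefile.test test globsearch="mat*" > pass2.log 2>&1
  # pull out the per-test result lines (assumed to look like
  # 'ok mat_tests-ex1_1' or 'not ok ...') and diff the sorted names
  grep -Eo '^(not )?ok [A-Za-z0-9_-]+' pass1.log | sort -u > pass1.tests
  grep -Eo '^(not )?ok [A-Za-z0-9_-]+' pass2.log | sort -u > pass2.tests
  diff pass1.tests pass2.tests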
--
Tech-X Corporation kruger at txcorp.com
5621 Arapahoe Ave, Suite A Phone: (720) 974-1841
Boulder, CO 80303 Fax: (303) 448-7756