[Lustre-discuss] [Discuss] coverage measurement at 2012 09 15
roman_grigoryev at xyratex.com
Mon Oct 8 07:12:01 PDT 2012
On 09/29/2012 04:24 PM, Dilger, Andreas wrote:
> Hi Roman,
> The coverage data is interesting. It would be even more useful to be able
> to compare it to the previous code coverage run, if they used the same
> method for measuring coverage (the new report states that the method has
> changed and reduced coverage).
On the page http://www.opensfs.org/foswiki/bin/view/Lustre/CodeCoverage I
keep something like a history; the page is maintained manually. So far I
have observed mostly small coverage changes coming from lustre/tests code
updates; most of the difference between reports comes from my improvements
to the collection process.
The next steps I want to take are excluding testing binaries (for example,
lustre/tests) from my report and including more suites in the execution.
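For example, such filtering could be done on a per-file summary parsed from
the collected coverage data (a minimal sketch; the file names, numbers, and
the exclusion list are invented for illustration, not our actual tooling):

```python
# Recompute total line coverage after dropping testing binaries
# (e.g. everything under lustre/tests) from a per-file summary of
# the form {path: (lines_hit, lines_total)}.

EXCLUDE_PREFIXES = ("lustre/tests/",)  # hypothetical exclusion list

def filtered_coverage(per_file):
    hit = total = 0
    for path, (lines_hit, lines_total) in per_file.items():
        if path.startswith(EXCLUDE_PREFIXES):
            continue  # skip testing binaries
        hit += lines_hit
        total += lines_total
    return 100.0 * hit / total if total else 0.0

report = {
    "lustre/llite/file.c": (800, 1000),    # production code
    "lustre/tests/sanity.c": (50, 500),    # test binary, excluded
}
print(round(filtered_coverage(report), 1))  # prints 80.0
```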
To publish regular measurements, we (Xyratex, and perhaps others) also
need to solve some technical issues:
- where and how to deploy the results?
- how to generate history diagrams?
- whether to publish the raw coverage results (and if so, where)?
Internally, Jenkins and HTTP sharing serve us for these tasks.
> Are the percentages of code coverage getting better or worse? Are there
> particular areas of the code that have poor coverage that could benefit
> from some focussed attention with new tests?
It is possible to answer the last question, more or less precisely, by
looking at the current coverage report.
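As one illustration of how the report can point at such areas, files can be
ranked by their line-coverage ratio (a hedged sketch; the file names and
numbers below are invented, not taken from the real report):

```python
# Rank files with the lowest line coverage first, so poorly covered
# areas stand out as candidates for focused new tests.

def worst_covered(per_file, limit=3):
    ranked = sorted(
        per_file.items(),
        key=lambda item: item[1][0] / item[1][1] if item[1][1] else 1.0,
    )
    return [(path, round(100.0 * hit / total, 1))
            for path, (hit, total) in ranked[:limit]]

summary = {
    "lustre/ldlm/ldlm_flock.c": (120, 400),   # 30% covered
    "lustre/llite/file.c": (900, 1000),       # 90% covered
    "lustre/osc/osc_request.c": (300, 600),   # 50% covered
}
print(worst_covered(summary, limit=2))
```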
> I can definitely imagine that many error handling code paths (e.g.
> checking for allocation failures) would not be exercised without specific
> changes (see e.g. my unlanded patch to fix the OBD_ALLOC() failure
> injection code).
I absolutely agree that some paths cannot be executed in a regular
environment. Often this is error- or constraint-handling code (call it
"error-processing" code). I think a metric for "non-error-processing" code
could be interesting and useful; it could be interpreted as coverage of the
"often-used" or "positive" code. In terms of quality, I give higher
priority to an existing undetected bug in "non-error-processing" code than
to the same bug in "error-processing" code. It might be a good idea to
somehow mark "error-processing" or "hard-to-execute" code and produce a
report with that code excluded.
In more modern languages this code often sits in "catch" blocks of
exception handlers, and such blocks can be tested via unit tests. This
raises the question: where should this code be tested, in unit or
functional tests? Testing this code in unit tests is often simpler.
> Running a test with periodic random allocation failures enabled and fixing
> the resulting bugs would improve coverage, though not in a systematic way
> that could be measured/repeated. Still, this would find a class of
> hard-to-find bugs.
> Similarly, running racer for extended periods is a good form of coverage
> generation, even if not systematic/repeatable. I think the racer code
> could be improved/extended by adding racer scripts that are
> Lustre-specific or exercise new functionality (e.g. "lfs setstripe",
> setfattr, getfattr, setfacl, getfacl). Running multiple racer instances on
> multiple clients/mounts and throwing recovery into the mix would
> definitely find new bugs.
Here a tricky issue emerges: we could have a test that does not generate
repeatable coverage. Should we include such a test in the regular report?
I think not, because we want repeatable results in order to evaluate
coverage continuously. But I think that test's coverage could occasionally
be evaluated separately; we could then derive a prediction of its coverage
and include it in the full report.
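One possible way to build such a prediction (a sketch under my own
assumptions, not an agreed method) is to run the non-repeatable test
several times and take the union of the lines it hit, as a stable estimate
that could be folded into the full report:

```python
# Union of per-run line sets from repeated executions of a
# non-repeatable (racer-like) test; the line numbers are invented.

def union_coverage(runs):
    covered = set()
    for lines_hit in runs:
        covered |= lines_hit   # any line hit in any run counts
    return covered

runs = [
    {10, 11, 12},   # run 1 takes one random path
    {10, 12, 30},   # run 2 takes another
    {11, 40},       # run 3
]
print(sorted(union_coverage(runs)))   # prints [10, 11, 12, 30, 40]
```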
> In general, having the code coverage is a good starting point, but it
> isn't necessarily useful if nothing is done to improve the coverage of the
> tests as a result.
> Cheers, Andreas
> On 2012-09-20, at 7:21, Roman Grigoryev <Roman_Grigoryev at xyratex.com>
>> next coverage measurement published,
>> please see
>> Entrance page