[Lustre-discuss] coverage measurement for some lustre test suites

Gearing, Chris chris.gearing at intel.com
Mon Jul 23 05:19:27 PDT 2012


>> On 2012-07-18, at 6:49 AM, Roman Grigoryev wrote:
>>> we did some work on collecting the code coverage generated by some
>>> Lustre tests.
>>>
>>> Results are available on opensfs site
>>> http://www.opensfs.org/foswiki/bin/view/Lustre/CodeCoverage ,
>>> please take a look and comment on it.
>>
>> This looks quite interesting.  I guess the next step is to figure out
>> which functions are receiving no coverage during testing, and write
>> tests to exercise them.
>
> This is the most important of the possible ways to use this info.
>
> It is possible to use this info to find probably-dead code. It would be
> good to check that with a static analyzer like Coverity or Polyspace too.
>
> Also, I think it could be helpful to know which test covers which code
> (for example, so developers can quickly test new code after a fix, before
> uploading it for review/wide testing, and maybe add changes to those
> tests). I have per-test coverage and a script for finding the tests that
> cover a given code line.
> In the future, we could create a script to find the tests with maximum
> code coverage (and probably update the SLOW subset from it), and to find
> tests with minimum or duplicate coverage and possibly refactor them.
>
> Maybe this can also be used to check what the benchmark/stress tests are
> really aimed at, once we have coverage for them.
>
>>
>> I suspect one class of functions which gets very little testing is the
>> lprocfs functions.  Writing a simple test to read all of the files
>> would get coverage, but it would be better to have actual tests that
>> verify that the content of the files is actually correct.
>
> Absolutely agree that coverage alone is not enough to prove quality. In
> coverage theory the percentage of all possible execution paths that are
> covered should be used to prove quality, but that is a 'spherical cow' for
> most applications.
>
> I think in real life we should follow a rule like this: 'dummy coverage is
> better than zero coverage, but coverage that exercises a real use case is
> better than dummy coverage'.
>
> Ideally, I would prefer to have real use cases in the tests, dummy coverage
> in unit tests, and advanced static code analyzers run over all the code.
>
> --
> Thanks,
>        Roman

I think this is very valuable data and something we should update over time so
that we can see which way it is trending. Hopefully the effort of a repeat run
is much less than the original, which I'm sure was quite significant.

On the subject of coverage, many people are sceptical of code coverage results,
and whilst it is true that 100% coverage doesn't mean everything is tested,
0% does mean nothing is tested. I make this point because what we can say is
that over time the % coverages should all be increasing, because it's difficult
to imagine a case where less coverage is better testing.
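
To make that trend checkable rather than a matter of opinion, a repeat run
could be compared against the previous one automatically. A rough Python
sketch, assuming the coverage was captured as lcov tracefiles (the .info file
names below are made up, and this is untested):

#!/usr/bin/env python3
# Compare overall line coverage between two lcov tracefiles, e.g. last
# run's and this run's, and flag a regression if the percentage dropped.
# Assumption: coverage data is available as lcov .info tracefiles.
import sys

def line_coverage(info_path):
    """Return (lines_hit, lines_instrumented) summed over all source files."""
    hit = total = 0
    with open(info_path) as f:
        for line in f:
            if line.startswith("DA:"):            # DA:<line>,<hit count>
                count = int(line.strip().split(",")[1])
                total += 1
                if count > 0:
                    hit += 1
    return hit, total

def percent(hit, total):
    return 100.0 * hit / total if total else 0.0

if __name__ == "__main__":
    old, new = sys.argv[1], sys.argv[2]           # e.g. previous.info current.info
    old_pct = percent(*line_coverage(old))
    new_pct = percent(*line_coverage(new))
    print("old: %.2f%%  new: %.2f%%  delta: %+.2f%%"
          % (old_pct, new_pct, new_pct - old_pct))
    if new_pct < old_pct:
        sys.exit(1)                               # coverage went down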

Returning to the specifics of this set, you presumably have the report that is
currently on opensfs broken down for any given test, i.e. we could have a page
that lists all of the Lustre suites/tests and lets each one be clicked through
to the underlying report. Is that the case, and if so would you be able to
publish it?
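
If the per-test reports already exist, generating such an index page is cheap.
A sketch of what I mean, assuming one genhtml-style output directory per
suite/test (the reports/<suite>/<test>/index.html layout here is just an
assumption):

#!/usr/bin/env python3
# Emit a single index page linking every per-test coverage report.
# Assumed layout: reports/<suite>/<test>/index.html
import os

REPORT_ROOT = "reports"

rows = []
for suite in sorted(os.listdir(REPORT_ROOT)):
    suite_dir = os.path.join(REPORT_ROOT, suite)
    if not os.path.isdir(suite_dir):
        continue
    for test in sorted(os.listdir(suite_dir)):
        report = os.path.join(suite, test, "index.html")
        if os.path.exists(os.path.join(REPORT_ROOT, report)):
            rows.append('<li><a href="%s">%s / %s</a></li>' % (report, suite, test))

with open(os.path.join(REPORT_ROOT, "index.html"), "w") as out:
    out.write("<html><body><h1>Lustre test coverage reports</h1><ul>\n")
    out.write("\n".join(rows))
    out.write("\n</ul></body></html>\n")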

In fact, as you say, flipping the tree on its head, it should be possible to
click on any line and find out which tests exercise that line, which would be
cool. Of course we have to be careful here that data paths as well as code
paths need to be tested for any change, and data paths are really tricky to
map.
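
For the "flipping the tree" part, the inversion itself is straightforward if
there is one tracefile per test; something along these lines (Python sketch,
where the per-test/*.info naming and the example source path are assumptions
on my part):

#!/usr/bin/env python3
# Build a map from source file:line to the tests that executed it,
# given one lcov tracefile per test (naming is assumed).
import glob
import os
from collections import defaultdict

line_to_tests = defaultdict(set)

for info in glob.glob("per-test/*.info"):         # assumed: one .info per test
    test = os.path.splitext(os.path.basename(info))[0]
    src = None
    with open(info) as f:
        for rec in f:
            rec = rec.strip()
            if rec.startswith("SF:"):             # SF:<source file path>
                src = rec[3:]
            elif rec.startswith("DA:") and src:   # DA:<line>,<hit count>
                lineno, count = rec[3:].split(",")[:2]
                if int(count) > 0:
                    line_to_tests["%s:%s" % (src, lineno)].add(test)
            elif rec == "end_of_record":
                src = None

# Example query; the path is made up and must match the SF: paths in the data.
key = "lustre/lov/lov_obd.c:123"
print(key, "->", sorted(line_to_tests.get(key, [])))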

Thanks

Chris



