[Lustre-discuss] [Discuss] coverage measurement for some lustre test suites

Andreas Dilger adilger at whamcloud.com
Sun Jul 22 22:41:40 PDT 2012

On 2012-07-22, at 1:24, "Roman Grigoryev" <Roman_Grigoryev at xyratex.com> wrote:
> Andreas Dilger <adilger at whamcloud.com> wrote at  Sat, 21 Jul 2012 21:35:50  
> +0400:
>> On 2012-07-18, at 6:49 AM, Roman Grigoryev wrote:
>>> we did some work on collecting the code coverage generated by some
>>> Lustre test suites.
>>> The results are available on the opensfs site:
>>> http://www.opensfs.org/foswiki/bin/view/Lustre/CodeCoverage
>>> please take a look and comment.
>> This looks quite interesting.  I guess the next step is to figure out  
>> which functions are receiving no coverage during testing, and write  
>> tests to exercise them.
> This is the most important of the possible ways to use this information.
> It could also be used to find code that is probably dead. It would be
> good to cross-check that with a static analyzer like Coverity or
> Polyspace too.

The problem I've seen with Coverity is that the output is not intended to be exported outside their web interface. There are a couple of free code analysis tools that could be integrated with the build system more easily, such as "sparse" and "clang".

The problem with starting to use any static analysis tools is that someone has to put in the time and effort to clean up the existing warnings/errors and any false positives. At that point, they could be enabled for continuous use going forward. 

> Also, I think it could be helpful to know which tests cover which code
> (for example, so developers can quickly test new code after a fix before
> uploading it for review/wide testing, and maybe add changes to those
> tests). I have coverage per test and a script for finding the tests that
> cover a given code line.
> In the future, we could create a script to find the set of tests with
> maximum code coverage (and probably update the SLOW subset)

Agreed - this would be useful. It would also be good if there was an automated way to know if newly added tests are covering new code as intended. 

> and to find tests with minimal or duplicate coverage, which could
> possibly be refactored.

In some cases, it isn't clear that straight code coverage is enough, because it also doesn't check the different states the system might be in at the time. 

> Maybe this can also be used to verify the focus of benchmark/stress
> tests once we have coverage for them.
>> I suspect one type of function which gets very little testing is the
>> lprocfs functions.  Writing a simple test to read all of the files
>> would get coverage, but it would be better to have actual tests that
>> verify that the content of the files is actually correct.
> Absolutely agree that coverage alone is not enough to prove quality. In
> coverage theory, the percentage of all combinations of execution paths
> covered should be used to prove quality, but that is a 'spherical cow'
> for most applications.
> I think in real life we should use a rule like this: 'dummy coverage is
> better than zero coverage, but coverage that exercises a real use case
> is better than dummy coverage'.
> Ideally, I would prefer real use cases in tests, dummy coverage in unit
> tests, and also advanced static code analyzers for all code.
> -- 
> Thanks,
>    Roman
>> Cheers, Andreas
>> --
>> Andreas Dilger                       Whamcloud, Inc.
>> Principal Lustre Engineer            http://www.whamcloud.com/
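As a starting point for the lprocfs case, even the "dummy coverage" pass could be a trivial read-everything walk. The sketch below is illustrative only: the real files live under /proc/fs/lustre, but a throwaway directory is used here so the example is self-contained, and verifying file contents would still need per-file checks on top of this.

```python
# Sketch of a "read every lprocfs file" smoke test. The real tree would
# be /proc/fs/lustre; a temporary directory stands in for it here so the
# sketch runs anywhere. This only proves the files are readable, not
# that their contents are correct.
import os
import tempfile

def read_all_files(root):
    """Try to read every regular file under root; return (ok, failed) lists."""
    ok, failed = [], []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    fh.read()
                ok.append(path)
            except OSError:
                failed.append(path)
    return ok, failed

# Demo on a fake lprocfs-like tree instead of /proc/fs/lustre
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "llite"))
with open(os.path.join(root, "llite", "max_read_ahead_mb"), "w") as fh:
    fh.write("64\n")

ok, failed = read_all_files(root)
print(len(ok), len(failed))  # 1 0
```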
