[Lustre-discuss] Metadata storage in test script files

Roman Grigoryev Roman_Grigoryev at xyratex.com
Thu May 3 02:17:30 PDT 2012


Hi,

On 05/02/2012 11:01 PM, Andreas Dilger wrote:
> I'm chopping out most of the discussion, to try and focus on the core issues here.
> 
> On 2012-05-02, at 10:35 AM, Roman Grigoryev wrote:
>> On 05/02/2012 01:25 PM, Chris wrote:
>>> I cannot say whether you should store this information with your results
>>> because I have no insight into your private testing practices.
>>
>> I just want to have this info not only in Maloo or other big systems but
>> also in the default test harness. Developers can inspect results by hand,
>> and testers should also be able to run tests in a specific environment. If
>> we can provide some helpful info, I think that is good. A few kilobytes is
>> not as much as the logs, but it can help in some cases.
> 
> I don't think you two are in disagreement here.  We want the test descriptions and other 
> metadata with the tests, open for any usage (human, test scripts, different test harnesses, etc).

I absolutely agree. My point is just about form: machine usage needs a
formal description of the fields, and tools to check them easily.
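To illustrate the kind of formal field description and mechanical check meant here, a minimal sketch follows. The metadata block format and the field names (TEST_SUMMARY, TEST_COMPONENT) are assumptions for illustration, not an agreed Lustre convention:

```shell
#!/bin/sh
# Hypothetical metadata format: "#FIELD: value" comment lines in a test
# script. The field names below are illustrative only.
cat > sanity-demo.sh <<'EOF'
#!/bin/bash
#TEST_SUMMARY: verify file creation on a single OST
#TEST_COMPONENT: llite
test_1() { touch "$DIR/f1" || error "create failed"; }
EOF

# Minimal mechanical check: each required field must appear exactly once.
check_metadata() {
    for field in TEST_SUMMARY TEST_COMPONENT; do
        count=$(grep -c "^#${field}:" "$1")
        [ "$count" -eq 1 ] || { echo "$1: missing or duplicate $field"; return 1; }
    done
    echo "$1: metadata ok"
}

check_metadata sanity-demo.sh
```

A checker this small could run in the harness itself or as a commit hook, so a malformed description is caught before the test lands.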

> 
>>> I don't think people should introduce dependencies either, but they have, and we have to deal with that fact. In your example,
>>> if C is dependent on A and A is removed, then C cannot be run.
>>
>> Maybe I'm incorrect, but fighting the dependencies looks more important
>> than adding descriptions.
> 
> For the short term.  However, finding dependencies is easily done through simple mechanical steps (e.g. try to run each subtest
> independently).  Since the policy in the past was to make all tests independent, I expect that not very many tests will actually
> have dependencies.

I'm working on this task right now.
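The mechanical step Andreas describes (run each subtest alone and flag the ones that only pass after their predecessors) could be sketched as below. `run_one_subtest` is a stand-in for invoking the real harness with a single subtest selected; the two sample subtests and the `state_from_1` marker file are invented for illustration:

```shell
#!/bin/sh
# Sketch: detect hidden inter-subtest dependencies by running each
# subtest in isolation. A subtest that fails when run alone, but passes
# in the full sequence, likely depends on an earlier subtest's state.
run_one_subtest() {
    case "$1" in
        test_1) return 0 ;;            # independent, passes alone
        test_2) [ -f state_from_1 ] ;; # needs a file test_1 would create
    esac
}

for t in test_1 test_2; do
    if run_one_subtest "$t"; then
        echo "$t: independent"
    else
        echo "$t: possible dependency, fails when run alone"
    fi
done
```

Running this prints `test_1: independent` and flags `test_2`, which is exactly the list of candidates a human would then inspect.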

> 
> However, the main reason for having good descriptions of the tests is to gain an understanding of what part of the
> code the tests are trying to exercise, what problem they were written to verify, and what value they provide. 
> We cannot reasonably rewrite or modify tests safely if we don't have a good understanding of what they are doing today.
> Also, this helps people running and debugging the tests and their failures for the long term.

I absolutely agree with the common goal and with text descriptions for
humans. I just don't really see why test refactoring and test understanding
(creating summary descriptions) cannot be combined into one task. (Also, I
have a feeling that developers will find many errors while going through the
tests to write descriptions. I have some experience with similar tasks, and
a fresh look at old tests often finds problems.)

-- 
Thanks,
	Roman




