[Lustre-discuss] obdfilter-survey customization variables

Cliff White Cliff.White at Sun.COM
Mon Nov 19 14:01:50 PST 2007


Sébastien Buisson wrote:
> Hello Cliff,
> 
> Thanks for the information.
> But I am wondering: in the 'disk' case, can we compare the number of 
> threads used to a number of clients? Or is it more like the number of 
> threads running on the OSS and processing read and write requests? Or 
> maybe it is something completely different?
> My question is essentially the same for the 'disk', 'network' and 
> 'netdisk' cases: for each case, what does the thread number represent 
> or simulate, if we want to make an analogy with a fully functional 
> Lustre filesystem?

Well, it is sort of like the number of clients. The test walks through 
two ranges: regions and threads. Regions are offsets on the disk, 
causing disk seeks. The combination of the two can be considered a 
client workload; how it maps to a real client workload depends on what 
the real workload does. The threads in this case are running on the 
client, so there is a rough equivalence.
cliffw

> 
> Regards,
> Sebastien.
> 
> 
> Cliff White wrote:
>> Sébastien Buisson wrote:
>>> Hello everyone,
>>>
>>> I am looking for some information about the obdfilter-survey 
>>> customization variables thrlo and thrhi.
>>>
>>> The fact is that I could not find any precise documentation about 
>>> the meaning of these variables. Does their meaning depend on the 
>>> case ('disk', 'network', 'netdisk')?
>>>
>> Thrlo and thrhi determine the number of threads tested. In a perfect 
>> world, IO should increase as you add threads. Once you reach the 
>> maximum throughput for your hardware, IO should stay steady as you 
>> add threads. The test starts with thrlo threads; on each pass the 
>> number of threads is doubled until it reaches thrhi.
>>
>> The values you use should not depend on the test case. We usually adjust 
>> based on the hardware - a big SMP system will support more threads.
>>
>> cliffw
>>
>>> Thanks in advance.
>>> Sebastien.