[Lustre-discuss] lru_size very small

Brock Palen brockp at umich.edu
Sat Aug 23 06:01:24 PDT 2008


Great!

So I read this as meaning lru_size no longer needs to be manually
adjusted.  That's great!
Thanks!

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
brockp at umich.edu
(734)936-1985



On Aug 23, 2008, at 7:22 AM, Andreas Dilger wrote:
> On Aug 22, 2008  15:39 -0400, Brock Palen wrote:
>> It looks like lru_size is not a static parameter.  On most of our
>> hosts it starts at zero; once the file system is accessed, the
>> values start to rise.  The values are highest for the MDS.
>>
>> cat nobackup-MDT0000-mdc-000001022c433800/lru_size
>>   3877
>
> Yes, in 1.6.5 the LRU size is no longer static; it is dynamic, based
> on load.  This optimizes the number of locks available to nodes that
> have very different workloads than others (e.g. login/build nodes vs.
> compute nodes vs. backup nodes).
>
>> So in 1.6.5.1, are locks dynamically adjusted based on the RAM
>> available on the MDS/OSSes?  Notice that the value above is _much_
>> higher than the default of '100' in the manual.
>
> The total number of locks available is now a function of the RAM
> on the server.  I think the maximum is 50 locks/MB, but this is
> hooked into the kernel VM so that under excessive memory pressure
> the LRU size is shrunk.
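>
> (If that 50 locks/MB figure holds, a server with, say, 16 GB of RAM,
> purely as an illustrative example, would allow on the order of
> 16384 MB * 50 = ~820,000 locks across all clients before memory
> pressure starts pushing the LRUs back down.)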
>
>> I should point out this value was 0 until I did a 'find . | wc -l'
>> in a directory.  The same holds for regular access: users on nodes
>> that access Lustre have locks, while nodes that have not accessed
>> Lustre yet are still at 0.  (By access I mean an application that
>> uses our Lustre mount rather than our NFS mount.)
>>
>> Any feedback on the nature of locks and lru_size?
>> We are looking to do what the manual says about upping the number on
>> the login nodes.
>
> Yes, the manual needs an update.
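>
> (If you still want to pin the LRU to a fixed size on the login nodes,
> writing a non-zero value into each namespace should override the
> dynamic sizing, and writing 0 returns it to dynamic; a sketch, with
> the value 2000 purely illustrative:
>
>     for f in /proc/fs/lustre/ldlm/namespaces/*/lru_size; do
>         echo 2000 > $f
>     done
> )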
>
> Cheers, Andreas
> --
> Andreas Dilger
> Sr. Staff Engineer, Lustre Group
> Sun Microsystems of Canada, Inc.
>
>
>



