[Lustre-discuss] lru_size very small
Brock Palen
brockp at umich.edu
Fri Aug 22 12:39:14 PDT 2008
Replying to myself (without an answer, though).
It looks like lru_size is not a static parameter. On most of our
hosts it starts at zero, and once the file system is accessed the
values start to rise. The value gets highest for the MDC:
cat nobackup-MDT0000-mdc-000001022c433800/lru_size
3877
So in 1.6.5.1, are locks dynamically adjusted based on the RAM
available on the MDS/OSSs? Notice that the value above is _much_
higher than the default of 100 given in the manual.
I should point out this value was 0 until I ran 'find . | wc -l'
in a directory. The same goes for regular access: users on nodes
that have accessed Lustre hold locks, while nodes that have not
touched Lustre yet are still at 0. (By access I mean an application
that uses our Lustre mount rather than our NFS mount.)
Any feedback on the nature of locks and lru_size?
We are looking to follow the manual's advice about raising the
value on the login nodes.
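For reference, a minimal sketch of what raising the value on a login node might look like, assuming lctl set_param as in Lustre 1.6 and that (as the manual suggests) a nonzero lru_size pins a static LRU size while 0 leaves it dynamic; the value 400 is just a placeholder:

```shell
# Hypothetical example: pin a larger static lock LRU on a login node.
# A nonzero value disables dynamic LRU sizing for that namespace.
lctl set_param ldlm.namespaces.*-mdc-*.lru_size=400
lctl set_param ldlm.namespaces.*-osc-*.lru_size=400
```

Note this is per-client and not persistent across remounts, so it would need to go in a boot-time script on the login nodes.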
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
brockp at umich.edu
(734)936-1985
On Aug 21, 2008, at 11:50 PM, Brock Palen wrote:
> Sorry for throwing up so many quick questions on the list in a short
> time.
>
> Looking at the manual about locking, the manual states
>
> "The default value of LRU size is 100"
>
> I looked on our login nodes to increase its value; currently Lustre
> has set lru_size to 32 for the MDS, 1 for 9 of the OSTs, 3 for 1 OST,
> 4 for 1 OST, and 0 for 3 OSTs.
>
> I should note, though, that all 14 OSTs are spread across two OSSs,
> both with 16 GB of RAM (x4500s).
>
> Compared to what the manual says, this sounds really small.
> Would this be a sign that we don't have enough memory in our OSSs/
> MDS for our number of clients?
>
> I looked on a few of our clients; many have an lru_size of only 1
> for the MDS and 0 for all the OSTs.
>
> Am I reading something wrong? Or do we have to set this at startup
> rather than let Lustre figure it out from clients/RAM as stated in
> the manual?
>
> This state worries me because it gives me the feeling the cache will
> not function at all due to the lack of available locks. I don't
> want to end up on the wrong end of "can speed up Lustre dramatically".
>
> Thanks.
>
> 633 clients,
> 16 GB MDS/MGS,
> 2x 16 GB OSSs.
>
>
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> brockp at umich.edu
> (734)936-1985
>
>
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>
>