[lustre-discuss] LFS tuning hierarchy question

Patrick Farrell pfarrell at whamcloud.com
Thu Jan 24 13:09:53 PST 2019


It varies by parameter.  If the server has a value set (with lctl set_param -P on the MGS), it will override the client value; otherwise you'll get the default value.  (Max pages per RPC is a bit of an exception, in that the client and server negotiate to the highest mutually supported value.)
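For example, a persistent, per-file-system override could look roughly like this (run on the MGS; the fsname "bar" and the wildcard pattern are illustrative, adjust them to your setup):

    lctl set_param -P mdc.bar-*.max_rpcs_in_flight=32

Because the pattern only matches the MDC devices belonging to "bar", any other mounted file systems keep their own settings.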


It's absolutely possible to have different settings for different file systems, and you're already doing it if those values are what you're getting from lctl get_param.
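As a quick sanity check on the client (again assuming the fsname "bar"), you can query or temporarily set a single file system's instances by matching on the fsname; a plain set_param like this is not persistent and resets on remount:

    lctl get_param mdc.bar-*.max_rpcs_in_flight
    lctl set_param mdc.bar-*.max_rpcs_in_flight=32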

________________________________
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Ms. Megan Larko <dobsonunit at gmail.com>
Sent: Thursday, January 24, 2019 1:53:19 PM
To: Lustre User Discussion Mailing List
Subject: [lustre-discuss] LFS tuning hierarchy question

Halloo---  People!

I am seeking confirmation of an observed behavior in Lustre.

I have a Lustre client running Lustre 2.7.2.  Mounted on this client I have /mnt/foo (Lustre server 2.7.2) and /mnt/bar (Lustre server 2.10.4).

Servers for /mnt/foo have max_rpcs_in_flight=8  (the default value)
Servers for /mnt/bar have max_rpcs_in_flight=32

On the Lustre client, the command "lctl get_param mdc.*.max_rpcs_in_flight" shows both file systems using max_rpcs_in_flight=8.

Is it correct that the client uses the lowest value for a Lustre tunable presented by any mounted Lustre file system server?  Or does the client need to be tuned so that it may use "up to" the maximum value of each mounted file system, if that specific Lustre server supports the value?

Really I am wondering whether it is possible, in this case, for max_rpcs_in_flight to be 32 for the /mnt/bar Lustre file system while still using the more limited max_rpcs_in_flight of 8 for /mnt/foo.

TIA,
megan