[lustre-discuss] permanent configuration: "set_param -P" or "conf_param"

Cowe, Malcolm J malcolm.j.cowe at intel.com
Thu Apr 6 17:02:50 PDT 2017


I am not sure about the checksums value: I see the same behaviour on my system. It may be a failsafe against permanently disabling checksums, since there is a risk of data corruption.

For max_pages_per_rpc, setting the RPC size larger than 1 MB (256 pages) is only available in Lustre 2.9.0 and newer, or in Intel's EE Lustre 3.1. To make this work, the brw_size parameter on the OSTs must also be adjusted to match the RPC size. The Lustre manual documents the feature:

https://build.hpdd.intel.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#idm139670075738896
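
As a rough guide (assuming the usual 4 KiB page size), 256 pages x 4 KiB = 1 MB, the default RPC size; a 4 MB bulk RPC therefore corresponds to max_pages_per_rpc=1024 on the clients and brw_size=4 on the OSTs, and the two need to be kept consistent.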

Here are some rough notes I have to hand:

To check the current settings of the brw_size attribute, log into each OSS and run the following command:
lctl get_param obdfilter.*.brw_size

For example:
[root@ct66-oss1 ~]# lctl get_param obdfilter.*.brw_size
obdfilter.demo-OST0000.brw_size=1
obdfilter.demo-OST0004.brw_size=1

The value returned is measured in MB.

To change the setting temporarily on an OSS server:

lctl set_param obdfilter.*.brw_size=<n>

where <n> is an integer value between 1 and 16. Again, the value is in MB. To set brw_size persistently, log in to the MGS and, as root, use the following syntax:

lctl set_param -P obdfilter.*.brw_size=<n>

This will set the value for all OSTs across all file systems registered with the MGS. To scope the settings to an individual file system, change the filter expression to include the file system name:

lctl set_param -P obdfilter.<fsname>-*.brw_size=<n>
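
For example, to set a 4 MB bulk RPC size persistently on all OSTs of a file system named "demo" (the file system name and value here are only illustrative):

lctl set_param -P obdfilter.demo-*.brw_size=4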

To temporarily change the value of max_pages_per_rpc, use the following command on each client:

lctl set_param osc.*.max_pages_per_rpc=<n>

For example, to set max_pages_per_rpc to 1024 (4 MB):

lctl set_param osc.*.max_pages_per_rpc=1024

To make the setting persistent, log into the MGS server and run the lctl set_param command using the -P flag:

lctl set_param -P osc.*.max_pages_per_rpc=<n>

Again, the scope can be narrowed by changing the pattern to match the file system name:

lctl set_param -P osc.<fsname>-*.max_pages_per_rpc=<n>

For example:
lctl set_param -P osc.demo-*.max_pages_per_rpc=1024

Note that I have found that if brw_size is changed, you may have to re-mount the clients before you are able to set max_pages_per_rpc > 256.
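
For example, to remount a client (the mount point and MGS NID below are only placeholders, substitute your own):

umount /mnt/demo
mount -t lustre mgsnode@tcp:/demo /mnt/demo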


Malcolm Cowe
High Performance Data Division

Intel Corporation | www.intel.com


From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Reinoud Bokhorst <rbokhorst at astron.nl>
Date: Friday, 7 April 2017 at 1:31 am
To: Lustre discussion <lustre-discuss at lists.lustre.org>
Subject: [lustre-discuss] permanent configuration: "set_param -P" or "conf_param"


Hi all,
Two days ago I made the following Lustre configuration changes:

lctl set_param -P osc.*.checksums=0
lctl set_param -P osc.*.max_pages_per_rpc=512
lctl set_param -P osc.*.max_rpcs_in_flight=32
lctl set_param -P osc.*.max_dirty_mb=128

I ran these commands on the MGS. The -P flag promised to make a permanent change, and doing this on the MGS would make it system-wide. Indeed, directly after running the commands I noticed that the settings were nicely propagated to the other nodes.
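
I checked this on a client with something like the following (same parameter names as above):

lctl get_param osc.*.checksums osc.*.max_pages_per_rpc osc.*.max_rpcs_in_flight osc.*.max_dirty_mb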

When I look now, only "max_rpcs_in_flight" and "max_dirty_mb" still have those values; the others are back to their defaults, namely checksums=1 and max_pages_per_rpc=256. The compute nodes have been rebooted in the meantime.

Two questions:
- Why were the settings of checksums and max_pages_per_rpc lost? (I suspect during the reboot)
- What is the proper way to make these changes permanent? Should I use "lctl conf_param"?

Our lustre version:

# lctl get_param version
version=
lustre: 2.7.0
kernel: patchless_client
build:  2.7.0-RC4--PRISTINE-3.10.0-327.36.3.el7.x86_64

Thanks,
Reinoud Bokhorst