[lustre-discuss] permanent configuration: "set_param -P" or "conf_param"

Reinoud Bokhorst rbokhorst at astron.nl
Fri Apr 7 00:30:38 PDT 2017


Thanks for the info. Indeed I was able to set max_pages_per_rpc to 512
in Lustre 2.7. I'm still wondering, though, how I can make that permanent
(and the checksums setting)?

When increasing max_rpcs_in_flight, I guess I also have to increase the
network peer credits? Are there any guidelines on this?
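
To be concrete about what I mean by peer credits, this is where I would
expect to change them; a rough sketch assuming an InfiniBand fabric with
the ko2iblnd LNet driver, and the values are illustrative only:

# /etc/modprobe.d/lustre.conf on clients and servers
# peer_credits caps concurrent sends to a single peer, so it should be
# at least of the same order as max_rpcs_in_flight
options ko2iblnd peer_credits=32 credits=256

As far as I know this only takes effect after the LNet modules are
reloaded (e.g. unmount, lustre_rmmod, remount).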

Cheers Reinoud


On 07-04-17 03:34, lustre-discuss-request at lists.lustre.org wrote:
> Date: Fri, 7 Apr 2017 01:07:17 +0000
> From: "Dilger, Andreas" <andreas.dilger at intel.com>
> To: "Cowe, Malcolm J" <malcolm.j.cowe at intel.com>
> Cc: "lustre-discuss at lists.lustre.org"
> 	<lustre-discuss at lists.lustre.org>
> Subject: Re: [lustre-discuss] permanent configuration: "set_param -P"
> 	or "conf_param"
>
> Actually, it was 16MB RPCs that landed in 2.9, along with improvements for handling larger RPC sizes (memory usage and such), and server-side support for setting the maximum RPC size per OST.
>
> The 4MB RPC support has been available since around Lustre 2.5, but without the other optimizations.
>
> Cheers, Andreas
>
> On Apr 6, 2017, at 18:03, Cowe, Malcolm J <malcolm.j.cowe at intel.com> wrote:
>
> I am not sure about the checksums value: I see the same behaviour on my system. It may be a failsafe against permanently disabling checksums, since there is a risk of data corruption.
>
> For max_pages_per_rpc, setting the RPC size larger than 1MB (256 pages) is only available in Lustre 2.9.0 and newer, or in Intel's EE Lustre 3.1. To make this work, one must also adjust the brw_size parameter on the OSTs to match the RPC size. The Lustre manual documents the feature:
>
> https://build.hpdd.intel.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#idm139670075738896
>
> Here are some rough notes I have to hand:
>
> To check the current settings of the brw_size attribute, log into each OSS and run the following command:
> lctl get_param obdfilter.*.brw_size
>
> For example:
> [root at ct66-oss1 ~]# lctl get_param obdfilter.*.brw_size
> obdfilter.demo-OST0000.brw_size=1
> obdfilter.demo-OST0004.brw_size=1
>
> The value returned is measured in MB.
>
> To change the setting temporarily on an OSS server:
>
> lctl set_param obdfilter.*.brw_size=<n>
>
> where <n> is an integer value between 1 and 16, again measured in MB. To set brw_size persistently, log in to the MGS and run the following as root:
>
> lctl set_param -P obdfilter.*.brw_size=<n>
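>
> For example, to set a 4MB maximum I/O size on every OST (the value 4 here is only illustrative; match it to the intended RPC size):
>
> lctl set_param -P obdfilter.*.brw_size=4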
>
> This will set the value for all OSTs across all file systems registered with the MGS. To scope the settings to an individual file system, change the filter expression to include the file system name:
>
> lctl set_param -P obdfilter.<fsname>-*.brw_size=<n>
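>
> For example, scoped to the demo file system from the output above:
>
> lctl set_param -P obdfilter.demo-*.brw_size=4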
>
> To temporarily change the value of max_pages_per_rpc, use the following command on each client:
>
> lctl set_param osc.*.max_pages_per_rpc=<n>
>
> For example, to set max_pages_per_rpc to 1024 (a 4MB RPC, assuming 4KB pages):
>
> lctl set_param osc.*.max_pages_per_rpc=1024
>
> To make the setting persistent, log into the MGS server and run the lctl set_param command using the -P flag:
>
> lctl set_param -P osc.*.max_pages_per_rpc=<n>
>
> Again, the scope can be narrowed by changing the pattern to match the file system name:
>
> lctl set_param -P osc.<fsname>-*.max_pages_per_rpc=<n>
>
> For example:
> lctl set_param -P osc.demo-*.max_pages_per_rpc=1024
>
> Note that I have found that if brw_size is changed, you may have to re-mount the clients before you'll be able to set max_pages_per_rpc > 256.
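>
> (A minimal remount sketch, assuming a client mount point of /mnt/lustre; substitute your own MGS NID and file system name:
>
> umount /mnt/lustre
> mount -t lustre <mgs-nid>@tcp:/demo /mnt/lustre
>
> On reconnect the client picks up the new brw_size from the OSTs, after which the larger max_pages_per_rpc values are accepted.)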
>
>
> Malcolm Cowe
> High Performance Data Division
>
> Intel Corporation | www.intel.com
>
>
> From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Reinoud Bokhorst <rbokhorst at astron.nl>
> Date: Friday, 7 April 2017 at 1:31 am
> To: Lustre discussion <lustre-discuss at lists.lustre.org>
> Subject: [lustre-discuss] permanent configuration: "set_param -P" or "conf_param"
>
>
> Hi all,
> Two days ago I made the following Lustre configuration changes:
>
> lctl set_param -P osc.*.checksums=0
> lctl set_param -P osc.*.max_pages_per_rpc=512
> lctl set_param -P osc.*.max_rpcs_in_flight=32
> lctl set_param -P osc.*.max_dirty_mb=128
>
> I ran these commands on the MGS. The -P flag promised to make the changes permanent, and doing this on the MGS would make them system-wide. Indeed, directly after running the commands I noticed that the settings were nicely propagated to the other nodes.
>
> When I look now, only max_rpcs_in_flight and max_dirty_mb still have those values; the others are back to their defaults, namely checksums=1 and max_pages_per_rpc=256. The compute nodes have been rebooted in the meantime.
>
> Two questions:
> - Why were the settings of checksums and max_pages_per_rpc lost? (I suspect during the reboot)
> - What is the proper way to make these changes permanent? Should I use "lctl conf_param"?
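>
> (From the manual it looks like conf_param takes a fsname-prefixed device name rather than a wildcard pattern, e.g.:
>
> lctl conf_param <fsname>.osc.max_dirty_mb=128
>
> but I'm not sure whether all of the parameters above can be set that way.)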
>
> Our lustre version:
>
> # lctl get_param version
> version=
> lustre: 2.7.0
> kernel: patchless_client
> build:  2.7.0-RC4--PRISTINE-3.10.0-327.36.3.el7.x86_64
>
> Thanks,
> Reinoud Bokhorst
>
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

