[Lustre-discuss] [HPDD-discuss] root squash problem
Cowe, Malcolm J
malcolm.j.cowe at intel.com
Sun Jul 21 15:37:24 PDT 2013
From the Ops Manual (and hence not from direct experience): nosquash_nids is a global setting when applied with conf_param on the MGS -- it affects all MDTs in the file system. That is why conf_param returns an error when you try to target an individual MDT.
Instead, specify the file system to which you want to apply the squash rule:
lctl conf_param <fsname>.mdt.nosquash_nids="<nids>"
e.g.:
lctl conf_param umt3.mdt.nosquash_nids="10.10.2.33@tcp"
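Since the original question also mentions root_squash, that companion parameter follows the same fsname-based pattern; the 99:99 uid/gid pair below is purely illustrative:

```shell
# Run on the MGS. Map root (uid 0) on clients to the unprivileged
# uid:gid 99:99, except on the NIDs listed in nosquash_nids.
lctl conf_param umt3.mdt.root_squash="99:99"
```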
To set this per MDT, use mkfs.lustre or tunefs.lustre (refer to the Lustre Operations Manual, section 22.2).
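A per-MDT invocation might look like the following sketch; the device path is a placeholder for the actual MDT backing device, and the target must be unmounted when tunefs.lustre is run:

```shell
# Run on the MDS with the MDT unmounted; /dev/mdt_device is a
# placeholder for the real backing device of umt3-MDT0000.
tunefs.lustre --param mdt.nosquash_nids="10.10.2.33@tcp" /dev/mdt_device
```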
Regards,
Malcolm.
> -----Original Message-----
> From: hpdd-discuss-bounces at lists.01.org [mailto:hpdd-discuss-
> bounces at lists.01.org] On Behalf Of Bob Ball
> Sent: Saturday, July 20, 2013 12:35 AM
> To: hpdd-discuss at lists.01.org; Lustre discussion
> Subject: [HPDD-discuss] root squash problem
>
> We have just installed Lustre 2.1.6 on SL6.4 systems. It is working
> well. However, I find that I am unable to apply root squash parameters.
>
> We have separate mgs and mdt machines. Under Lustre 1.8.4 this was not
> an issue for root squash commands applied on the mdt. However, when I
> modify the command syntax for lctl conf_param to what I think should
> now be appropriate, I run into difficulty.
>
> [root at lmd02 tools]# lctl conf_param
> mdt.umt3-MDT0000.nosquash_nids="10.10.2.33@tcp"
> No device found for name MGS: Invalid argument
> This command must be run on the MGS.
> error: conf_param: No such device
>
> [root at mgs ~]# lctl conf_param
> mdt.umt3-MDT0000.nosquash_nids="10.10.2.33@tcp"
> error: conf_param: Invalid argument
>
> I have not yet looked at setting the "root_squash" value, as this
> problem has stopped me cold. So, two questions:
>
> 1. Is this even possible with our split mgs/mdt machines?
> 2. If possible, what have I done wrong above?
>
> Thanks,
> bob
>
> _______________________________________________
> HPDD-discuss mailing list
> HPDD-discuss at lists.01.org
> https://lists.01.org/mailman/listinfo/hpdd-discuss