[lustre-discuss] nodemap switching fails

Sebastien Buisson sbuisson at ddn.com
Mon May 24 23:58:55 PDT 2021


Hi Thomas,

The phenomenon you observe is most likely a caching effect on the client side. If you clear the lock cache on your clients with the following command after the nodemap definitions have been updated, you should see the behaviour you expect:
client# lctl set_param ldlm.namespaces.$FSNAME-*.lru_size=clear
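
If in doubt, you can first confirm on the server side that the new configuration has propagated; nodemap changes are made on the MGS and pushed to the other servers with a small delay. A quick check, assuming the usual nodemap parameter names (as in 2.12):
mgs# lctl get_param nodemap.active
mgs# lctl get_param nodemap.Admin.admin_nodemap nodemap.Admin.trusted_nodemap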

Cheers,
Sebastien.

> On 21 May 2021, at 20:08, Thomas Roth <t.roth at gsi.de> wrote:
> 
> 
> Hi all,
> 
> Lustre 2.12.6 here.
> 
> Following the manual chapter '28. Mapping UIDs and GIDs with Nodemap':
> 
> This system has 4 clients.
> I set up an Admin nodemap with the following properties (set as sketched below):
> - admin=1
> - trusted=1
> - deny_unknown=0
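> 
> For reference, I created the nodemap and set these properties on the MGS roughly as follows (commands per the manual, from memory):
> - lctl nodemap_add Admin
> - lctl nodemap_modify --name Admin --property admin --value 1
> - lctl nodemap_modify --name Admin --property trusted --value 1
> - lctl nodemap_modify --name Admin --property deny_unknown --value 0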
> 
> Two of the clients are added to this map:
> - lctl nodemap_add_range --name Admin --range 10.20.3.[63-64]@o2ib6
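> 
> (Afterwards the range can be verified on the MGS with something like
> - lctl get_param nodemap.Admin.ranges
> assuming the nodemap parameter layout of 2.12.)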
> 
> nodemap is switched on
> - lctl nodemap_activate 1
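> 
> (A check with 'lctl get_param nodemap.active' on the MGS should print 1 here, and 0 after the deactivation below.)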
> 
> 
> -> The two nodes have full access; the other two (their NIDs are not listed anywhere, so they fall into the default nodemap) show root-squashed files/directories. Very nice.
> 
> 
> 
> nodemap is switched off
> - lctl nodemap_activate 0
> 
> Nothing changes (within 5 minutes or so). I would expect all four nodes to have equal access to the file system, but the two non-privileged nodes remain squashed, unless I umount and mount again (I did that on client 3).
> 
> Switching nodemap back on, my client 3 remains privileged (no root squash) until I umount and mount again.
> 
> 
> This is not the way this feature is intended to work, correct?
> 
> 
> (This entire Lustre system is an unused test system. Other than my own 'ls -l' on the mountpoint, there is no activity at all.)
> 
> Regards,
> Thomas
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


