[lustre-discuss] Lustre client memory and MemoryAvailable

Patrick Farrell pfarrell at whamcloud.com
Sun Apr 14 06:52:38 PDT 2019


echo 1 > drop_caches does not generate memory pressure - it requests that the page cache be cleared.  It would not be expected to affect slab caches much.

You could try 3 (i.e. 1+2, where 2 drops the dentry and inode caches).  That might do a bit more, because some (maybe many?) of the objects you're looking at would go away if the associated inodes or dentries were removed.  But fundamentally, drop_caches does not generate memory pressure and does not force reclaim.  It drops specific, identified caches.

The only way to force *reclaim* is memory pressure.

Your note that a lot more memory than expected was freed under pressure does tell us something, though.

It's conceivable Lustre needs to set SLAB_RECLAIM_ACCOUNT on more of its slab caches, so this piqued my curiosity.  My conclusion is that it doesn't; here's why:

The one quality reference I was quickly able to find suggests setting SLAB_RECLAIM_ACCOUNT wouldn't be so simple:
https://lwn.net/Articles/713076/

GFP_TEMPORARY is - in practice - just another name for __GFP_RECLAIMABLE, and setting SLAB_RECLAIM_ACCOUNT is equivalent to setting __GFP_RECLAIMABLE.  That article suggests caution is needed, as this should only be used for memory that is certain to be easily available, because using this flag changes the allocation behavior on the assumption that the memory can be quickly freed at need.  That is often not true of these Lustre objects.
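
To make this concrete, here is a sketch of what marking a slab cache reclaimable looks like - it is a single flag at cache-creation time.  This is not Lustre or kernel code; the cache name and object type are invented purely for illustration:
"
        #include <linux/module.h>
        #include <linux/slab.h>

        /* Illustrative only: object type and cache name invented here. */
        struct demo_obj {
                int val;
        };

        static struct kmem_cache *demo_cachep;

        static int __init demo_init(void)
        {
                /*
                 * SLAB_RECLAIM_ACCOUNT accounts these pages as reclaimable
                 * (SReclaimable, and hence MemAvailable) and tells the
                 * allocator they can be freed quickly under pressure -
                 * exactly the assumption that usually does not hold for
                 * Lustre's internal objects.
                 */
                demo_cachep = kmem_cache_create("demo_obj_cache",
                                                sizeof(struct demo_obj), 0,
                                                SLAB_RECLAIM_ACCOUNT, NULL);
                return demo_cachep ? 0 : -ENOMEM;
        }
        module_init(demo_init);
        MODULE_LICENSE("GPL");
"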

An easy way to learn more about this sort of question is to compare to other actively developed file systems in the kernel...

Comparing to other file systems, we see that in general, only the inode cache is allocated with SLAB_RECLAIM_ACCOUNT (it varies a bit).

XFS, for example, has only one use of KM_ZONE_RECLAIM, its name for this flag - the inode cache:
"
        xfs_inode_zone =
                kmem_zone_init_flags(sizeof(xfs_inode_t), "xfs_inode",
                        KM_ZONE_HWALIGN | KM_ZONE_RECLAIM | KM_ZONE_SPREAD,
                        xfs_fs_inode_init_once);
"

btrfs is the same, just the inode cache.  EXT4 has a *few* more caches marked this way, but not everything.
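
From memory (the exact flag combination varies a little between kernel versions), the btrfs inode cache creation looks roughly like:
"
        btrfs_inode_cachep = kmem_cache_create("btrfs_inode",
                        sizeof(struct btrfs_inode), 0,
                        SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD | SLAB_ACCOUNT,
                        init_once);
"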

So, no - I don't think so.  It would be atypical for Lustre to set SLAB_RECLAIM_ACCOUNT on its slab caches for internal objects.  Presumably this sort of thing is not considered reclaimable enough for this purpose.

I believe if you tried similar tests with other complex file systems (XFS might be a good start), you'd see broadly similar behavior.  (Lustre is probably a bit worse because it has a more complex internal object model, so more slab caches.)

VM accounting is distinctly imperfect.  The design is such that it's often impossible to know how much memory could be made available without actually going and trying to free it.  There are good, intrinsic reasons for some of that, and some of it is simply a design artifact...

I've copied in Neil Brown, who I think only reads lustre-devel, just in case he has some particular input on this.

Regards,
- Patrick
________________________________
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Jacek Tomaka <jacekt at dug.com>
Sent: Sunday, April 14, 2019 3:12:51 AM
To: lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre client memory and MemoryAvailable

Actually I think it is just a bug in the way the slab caches are created. Some of them should be passed a flag marking them as reclaimable,
i.e. something like:
https://patchwork.kernel.org/patch/9360819/

Regards.
Jacek Tomaka

On Sun, Apr 14, 2019 at 3:27 PM Jacek Tomaka <jacekt at dug.com> wrote:
Hello,

TL;DR;
Is there a way to figure out how much memory Lustre will make available under memory pressure?

Details:
We are running the Lustre client on Intel Xeon Phi (KNL) machines with 128GB of memory (CentOS 7), and in certain situations we see 10GB+ of memory allocated on the kernel side, i.e.:

vvp_object_kmem   3535336 3536986    176   46    2 : tunables    0    0    0 : slabdata  76891  76891      0
ll_thread_kmem     33511  33511    344   47    4 : tunables    0    0    0 : slabdata    713    713      0
lov_session_kmem   34760  34760    592   55    8 : tunables    0    0    0 : slabdata    632    632      0
osc_extent_kmem   3549831 3551232    168   48    2 : tunables    0    0    0 : slabdata  73984  73984      0
osc_thread_kmem    14012  14116   2832   11    8 : tunables    0    0    0 : slabdata   1286   1286      0
osc_object_kmem   3546640 3548350    304   53    4 : tunables    0    0    0 : slabdata  66950  66950      0
signal_cache      3702537 3707144   1152   28    8 : tunables    0    0    0 : slabdata 132398 132398      0

/proc/meminfo:
MemAvailable:   114196044 kB
Slab:           11641808 kB
SReclaimable:    1410732 kB
SUnreclaim:     10231076 kB

After executing

echo 1 >/proc/sys/vm/drop_caches

the slabinfo values don't change, but when I actually generate memory pressure with:

java -Xmx117G -Xms117G -XX:+AlwaysPreTouch -version

lots of memory gets freed:
vvp_object_kmem   127650 127880    176   46    2 : tunables    0    0    0 : slabdata   2780   2780      0
ll_thread_kmem     33558  33558    344   47    4 : tunables    0    0    0 : slabdata    714    714      0
lov_session_kmem   34815  34815    592   55    8 : tunables    0    0    0 : slabdata    633    633      0
osc_extent_kmem   128640 128880    168   48    2 : tunables    0    0    0 : slabdata   2685   2685      0
osc_thread_kmem    14038  14116   2832   11    8 : tunables    0    0    0 : slabdata   1286   1286      0
osc_object_kmem    82998  83263    304   53    4 : tunables    0    0    0 : slabdata   1571   1571      0
signal_cache       38734  44268   1152   28    8 : tunables    0    0    0 : slabdata   1581   1581      0

/proc/meminfo:
MemAvailable:   123146076 kB
Slab:            1959160 kB
SReclaimable:     334276 kB
SUnreclaim:      1624884 kB

We see an effect similar to generating memory pressure when executing:

echo 3 >/proc/sys/vm/drop_caches

But this can take a very long time (10 minutes).

So essentially, on a machine using the Lustre client, MemAvailable is no longer a good predictor of the amount of memory that can be allocated.
Is there a way to query Lustre and compensate for the Lustre cache memory that will be made available under memory pressure?
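
A rough workaround might be to sum the big Lustre slab caches from /proc/slabinfo and treat that total as memory that could be freed under pressure.  A minimal userspace sketch is below (the cache names are just the ones from the output above, whether they really shrink under pressure is exactly the open question, and reading /proc/slabinfo typically needs root):

#include <stdio.h>
#include <string.h>

int main(void)
{
        /* Lustre client caches taken from the slabinfo output above. */
        const char *caches[] = {
                "vvp_object_kmem", "osc_extent_kmem", "osc_object_kmem",
                "ll_thread_kmem", "lov_session_kmem",
        };
        FILE *f = fopen("/proc/slabinfo", "r");
        char line[512];
        unsigned long long total = 0;

        if (!f) {
                perror("/proc/slabinfo");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                char name[64];
                unsigned long num, objsize;
                size_t i;

                /* Data lines start: <name> <active_objs> <num_objs> <objsize> ... */
                if (sscanf(line, "%63s %*lu %lu %lu", name, &num, &objsize) != 3)
                        continue;
                for (i = 0; i < sizeof(caches) / sizeof(caches[0]); i++)
                        if (strcmp(name, caches[i]) == 0)
                                total += (unsigned long long)num * objsize;
        }
        fclose(f);
        /* num_objs * objsize ignores per-slab overhead, so this is only a
         * rough lower bound on what these caches hold. */
        printf("Selected Lustre slab caches: ~%llu kB\n", total / 1024);
        return 0;
}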

Regards.
--
Jacek Tomaka
Geophysical Software Developer



DownUnder GeoSolutions

76 Kings Park Road
West Perth 6005 WA, Australia
tel +61 8 9287 4143
jacekt at dug.com
www.dug.com



