[Lustre-discuss] lustre ram usage (contd)
Balagopal Pillai
pillai at mathstat.dal.ca
Sun Dec 23 14:01:54 PST 2007
Hi,
        The cluster was made idle over the weekend to look into the Lustre
RAM consumption issue. The RAM used during yesterday's rsync has still not
been freed. Here is the output from free:
             total       used       free     shared    buffers     cached
Mem:       4041880    3958744      83136          0     876132     144276
-/+ buffers/cache:    2938336    1103544
Swap:      4096564        240    4096324
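A quick sanity check on those numbers (a sketch; figures copied from the free output above): the "-/+ buffers/cache" line subtracts buffers and page cache from "used", but slab caches such as dentries and inodes are not subtracted, which is why ~2.9 GB still shows as used on that line.

```python
# Values (in KB) copied from the `free` output above.
total, used, free_kb = 4041880, 3958744, 83136
buffers, cached = 876132, 144276

# `free` computes the "-/+ buffers/cache" used figure as:
used_minus_cache = used - buffers - cached
print(used_minus_cache)  # 2938336, matching the -/+ line above
```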
        Looking at vmstat -m, there is something odd: ext3_inode_cache
and dentry_cache seem to be the biggest occupants of RAM, with
ldiskfs_inode_cache comparatively smaller.
-
Cache Num Total Size Pages
ll_fmd_cache 0 0 56 69
osc_quota_info 0 0 32 119
lustre_dquot_cache 0 0 144 27
fsfilt_ldiskfs_fcb 0 0 56 69
ldiskfs_inode_cache 430199 440044 920 4
ldiskfs_xattr 0 0 88 45
ldiskfs_prealloc_space 14 38 104 38
ll_file_data 0 0 128 31
lustre_inode_cache 0 0 896 4
lov_oinfo 0 0 256 15
ll_qunit_cache 0 0 72 54
ldlm_locks 10509 12005 512 7
ldlm_resources 10291 11325 256 15
ll_import_cache 0 0 440 9
ll_obdo_cache 0 0 208 19
ll_obd_dev_cache 40 40 5328 1
fib6_nodes 11 61 64 61
ip6_dst_cache 16 24 320 12
ndisc_cache 1 15 256 15
rawv6_sock 10 12 1024 4
udpv6_sock 1 4 1024 4
tcpv6_sock 3 4 1728 4
rpc_buffers 8 8 2048 2
rpc_tasks 8 12 320 12
rpc_inode_cache 6 8 832 4
msi_cache 4 4 5760 1
ip_fib_alias 10 119 32 119
ip_fib_hash 10 61 64 61
dm_tio 0 0 24 156
dm_io 0 0 40 96
dm-bvec-(256) 0 0 4096 1
dm-bvec-128 0 0 2048 2
dm-bvec-64 0 0 1024 4
dm-bvec-16 0 0 256 15
dm-bvec-4 0 0 64 61
dm-bvec-1 0 0 16 225
dm-bio 0 0 128 31
uhci_urb_priv 2 45 88 45
ext3_inode_cache 1636505 1636556 856 4
ext3_xattr 0 0 88 45
journal_handle 8 81 48 81
journal_head 460 855 88 45
revoke_table 38 225 16 225
revoke_record 0 0 32 119
scsi_cmd_cache 2 14 512 7
unix_sock 105 155 768 5
ip_mrt_cache 0 0 128 31
tcp_tw_bucket 0 0 192 20
tcp_bind_bucket 14 238 32 119
tcp_open_request 0 0 128 31
inet_peer_cache 0 0 128 31
secpath_cache 0 0 192 20
xfrm_dst_cache 0 0 384 10
ip_dst_cache 40 80 384 10
arp_cache 16 30 256 15
raw_sock 9 9 832 9
udp_sock 14 45 832 9
tcp_sock 56 60 1536 5
flow_cache 0 0 128 31
mqueue_inode_cache 1 4 896 4
relayfs_inode_cache 0 0 592 13
isofs_inode_cache 0 0 632 6
hugetlbfs_inode_cache 1 6 624 6
ext2_inode_cache 0 0 752 5
ext2_xattr 0 0 88 45
dquot 0 0 224 17
eventpoll_pwq 3 54 72 54
eventpoll_epi 3 20 192 20
kioctx 0 0 384 10
kiocb 0 0 256 15
dnotify_cache 2 96 40 96
fasync_cache 1 156 24 156
shmem_inode_cache 376 405 816 5
posix_timers_cache 0 0 184 21
uid_cache 5 62 128 31
sgpool-256 32 32 8192 1
sgpool-128 32 32 4096 1
sgpool-64 32 32 2048 2
sgpool-32 32 32 1024 4
sgpool-16 32 32 512 8
sgpool-8 32 45 256 15
cfq_pool 66 207 56 69
crq_pool 64 324 72 54
deadline_drq 0 0 96 41
as_arq 0 0 112 35
blkdev_ioc 364 476 32 119
blkdev_queue 33 81 856 9
blkdev_requests 64 120 264 15
biovec-(256) 256 256 4096 1
biovec-128 256 256 2048 2
biovec-64 256 256 1024 4
biovec-16 256 270 256 15
biovec-4 256 305 64 61
biovec-1 256 450 16 225
bio 256 279 128 31
file_lock_cache 3 75 160 25
sock_inode_cache 209 220 704 5
skbuff_head_cache 16443 22008 320 12
sock 6 12 640 6
proc_inode_cache 2670 2670 616 6
sigqueue 40 230 168 23
radix_tree_node 68531 68880 536 7
bdev_cache 45 60 832 4
mnt_cache 60 80 192 20
inode_cache 927 1176 584 7
dentry_cache 1349923 1361216 240 16
filp 717 924 320 12
names_cache 3 3 4096 1
avc_node 12 648 72 54
key_jar 10 60 192 20
idr_layer_cache 110 133 528 7
buffer_head 230970 393300 88 45
mm_struct 47 105 1152 7
vm_area_struct 1573 2904 176 22
fs_cache 422 549 64 61
files_cache 58 171 832 9
signal_cache 529 585 256 15
sighand_cache 522 528 2112 3
task_struct 550 554 2000 2
anon_vma 601 1404 24 156
shared_policy_node 0 0 56 69
numa_policy 82 675 16 225
size-131072(DMA) 0 0 131072 1
size-131072 12 12 131072 1
size-65536(DMA) 0 0 65536 1
size-65536 205 205 65536 1
size-32768(DMA) 0 0 32768 1
size-32768 0 0 32768 1
size-16384(DMA) 0 0 16384 1
size-16384 936 936 16384 1
size-8192(DMA) 0 0 8192 1
size-8192 4911 4911 8192 1
size-4096(DMA) 0 0 4096 1
size-4096 676 676 4096 1
size-2048(DMA) 0 0 2048 2
size-2048 8753 8782 2048 2
size-1620(DMA) 0 0 1664 4
size-1620 86 104 1664 4
size-1024(DMA) 0 0 1024 4
size-1024 15228 15900 1024 4
size-512(DMA) 0 0 512 8
size-512 1189 2752 512 8
size-256(DMA) 0 0 256 15
size-256 10235 10560 256 15
size-128(DMA) 0 0 128 31
size-128 200934 211916 128 31
size-64(DMA) 0 0 64 61
size-64 712970 735416 64 61
size-32(DMA) 0 0 32 119
size-32 2338 94486 32 119
kmem_cache 210 210 256 15
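To put rough numbers on "biggest occupants" (a sketch; Num and Size columns copied from the first OSS's vmstat -m output above, ignoring per-page slab overhead): ext3_inode_cache alone accounts for ~1.3 GB.

```python
# Approximate per-cache memory use: Num (active objects) x Size (bytes
# per object), from the `vmstat -m` columns above.
caches = {
    "ext3_inode_cache":    (1636505, 856),
    "dentry_cache":        (1349923, 240),
    "ldiskfs_inode_cache": (430199,  920),
    "buffer_head":         (230970,   88),
}

for name, (num, size) in sorted(caches.items(),
                                key=lambda kv: kv[1][0] * kv[1][1],
                                reverse=True):
    mb = num * size / 1024 / 1024
    print(f"{name:22s} ~{mb:8.1f} MB")
```

That puts ext3_inode_cache around 1.3 GB and dentry_cache around 300 MB, which lines up with the ~2.9 GB shown as used by free even after buffers/cache are subtracted.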
        On the second OSS, here is the vmstat -m output. Again,
dentry_cache, ldiskfs_inode_cache, and ext3_inode_cache seem to be the
biggest users of RAM.
ll_fmd_cache 0 0 56 69
ldiskfs_inode_cache 987664 987668 920 4
lustre_inode_cache 0 0 896 4
ll_qunit_cache 0 0 72 54
ll_import_cache 0 0 440 9
ll_obdo_cache 0 0 208 19
ll_obd_dev_cache 10 10 5328 1
ip6_dst_cache 16 24 320 12
ndisc_cache 1 15 256 15
rpc_inode_cache 6 8 832 4
msi_cache 4 4 5760 1
ext3_inode_cache 392316 392328 856 4
scsi_cmd_cache 41 42 512 7
ip_mrt_cache 0 0 128 31
inet_peer_cache 0 0 128 31
secpath_cache 0 0 192 20
xfrm_dst_cache 0 0 384 10
ip_dst_cache 39 80 384 10
arp_cache 16 30 256 15
flow_cache 0 0 128 31
mqueue_inode_cache 1 4 896 4
relayfs_inode_cache 0 0 592 13
isofs_inode_cache 0 0 632 6
hugetlbfs_inode_cache 1 6 624 6
ext2_inode_cache 0 0 752 5
dnotify_cache 2 96 40 96
fasync_cache 1 156 24 156
shmem_inode_cache 370 400 816 5
posix_timers_cache 0 0 184 21
uid_cache 7 31 128 31
file_lock_cache 7 75 160 25
sock_inode_cache 216 235 704 5
skbuff_head_cache 16500 21768 320 12
proc_inode_cache 2260 2262 616 6
bdev_cache 56 56 832 4
mnt_cache 46 60 192 20
inode_cache 944 1218 584 7
dentry_cache 1387440 1387440 240 16
names_cache 10 10 4096 1
idr_layer_cache 91 98 528 7
fs_cache 366 549 64 61
files_cache 69 153 832 9
signal_cache 462 585 256 15
sighand_cache 453 453 2112 3
kmem_cache 180 180 256 15
        Is there a way to flush out these caches so that the RAM is
freed? The same issue is reported here:
http://lkml.org/lkml/2006/8/3/376 But both OSSes run CentOS 4 with a
2.6.9 kernel, so /proc/sys/vm/drop_caches doesn't seem to be available.
Is there anything in /proc, as described in
http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/ref-guide/s1-proc-directories.html
that can force the kernel to flush out the dentry_cache and
ext3_inode_cache once the rsync is over and the cache is no longer needed?
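For reference, the drop_caches interface discussed in that LKML thread would be used roughly as follows on later kernels (it was merged in 2.6.16, so this is not available on the stock CentOS 4 2.6.9 kernel; shown only as a sketch of what the question is asking for, and it requires root):

```shell
# Not available on 2.6.9 -- drop_caches appeared in kernel 2.6.16.
sync                                  # write out dirty pages first
echo 2 > /proc/sys/vm/drop_caches     # reclaim dentries and inodes
# echo 3 > /proc/sys/vm/drop_caches   # ...additionally drop the page cache
```

On older kernels these slab caches are only reclaimed under memory pressure, not via an explicit knob.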
Thanks very much.
Regards
Balagopal