[lustre-discuss] [EXTERNAL] Re: No read throughput shown for the sequential read write Filebench workload
Md Hasanur Rashid
mrashid2 at uncc.edu
Wed Oct 13 08:54:40 PDT 2021
Even after clearing the LDLM locks on the client (running lctl set_param
ldlm.namespaces.*.lru_size=clear after the sync and drop_caches commands),
the stats look the same as before. The extents_stats output is as follows:
llite.hasanfs-ffff882fdc929800.extents_stats=
snapshot_time: 1634138089.122870 (secs.usecs)
                            read         |        write
extents            calls    %  cum%  |  calls    %  cum%
  0K -    4K :       450  100   100  |      0    0     0
  4K -    8K :         0    0   100  |      0    0     0
  8K -   16K :         0    0   100  |      0    0     0
 16K -   32K :         0    0   100  |      0    0     0
 32K -   64K :         0    0   100  |      0    0     0
 64K -  128K :         0    0   100  |      0    0     0
128K -  256K :         0    0   100  |      0    0     0
256K -  512K :         0    0   100  |      0    0     0
512K - 1024K :         0    0   100  |      0    0     0
  1M -    2M :         0    0   100  |   1600   46    46
  2M -    4M :         0    0   100  |      0    0    46
  4M -    8M :         0    0   100  |      0    0    46
  8M -   16M :         0    0   100  |      0    0    46
 16M -   32M :         0    0   100  |   1826   53   100
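For reference, the table can be cross-checked with a small parser (a hypothetical helper, not part of the Lustre tooling; the row layout "lo - hi : read calls/%/cum% | write calls/%/cum%" is taken from the output above):

```python
# Minimal parser for llite extents_stats rows as shown above.
# Each data row: "<lo> - <hi> : r_calls r_% r_cum% | w_calls w_% w_cum%"
def parse_extents_stats(text):
    rows = []
    for line in text.splitlines():
        # Skip header/snapshot lines: data rows contain both ":" and "|".
        if ":" not in line or "|" not in line:
            continue
        bucket, rest = line.split(":", 1)
        reads, writes = rest.split("|")
        r = [int(x) for x in reads.split()]
        w = [int(x) for x in writes.split()]
        rows.append({"bucket": bucket.strip(),
                     "read_calls": r[0], "write_calls": w[0]})
    return rows

# The three non-zero rows from the dump above:
sample = """\
0K - 4K : 450 100 100 | 0 0 0
1M - 2M : 0 0 100 | 1600 46 46
16M - 32M : 0 0 100 | 1826 53 100
"""
rows = parse_extents_stats(sample)
# All 450 read calls fall in the 0K-4K bucket, matching the table.
print(sum(r["read_calls"] for r in rows))   # 450
print(sum(r["write_calls"] for r in rows))  # 3426
```

This makes the asymmetry easy to see at a glance: every read the llite layer recorded was tiny (under 4 KB), while the writes are all 1 MB or larger.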
Looking at the client-side rpc_stats, the following is observed:
osc.hasanfs-OST0000-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.38657 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 3648 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 47 1 1
2: 0 0 0 | 47 1 2
3: 0 0 0 | 48 1 3
4: 0 0 0 | 49 1 5
5: 0 0 0 | 50 1 6
6: 0 0 0 | 59 1 8
7: 0 0 0 | 55 1 9
8: 0 0 0 | 68 1 11
9: 0 0 0 | 2209 60 72
10: 0 0 0 | 1015 27 99
11: 0 0 0 | 1 0 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 66 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 66 1 3
512: 0 0 0 | 132 3 7
1024: 0 0 0 | 264 7 14
2048: 0 0 0 | 528 14 28
4096: 0 0 0 | 864 23 52
8192: 0 0 0 | 1728 47 100
osc.hasanfs-OST0001-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.39942 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 61 93 93 | 0 0 0
2: 1 1 95 | 0 0 0
4: 0 0 95 | 0 0 0
8: 0 0 95 | 0 0 0
16: 0 0 95 | 0 0 0
32: 0 0 95 | 0 0 0
64: 0 0 95 | 0 0 0
128: 1 1 96 | 1 0 0
256: 2 3 100 | 3792 99 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 63 96 96 | 45 1 1
2: 2 3 100 | 45 1 2
3: 0 0 100 | 45 1 3
4: 0 0 100 | 45 1 4
5: 0 0 100 | 45 1 5
6: 0 0 100 | 49 1 7
7: 0 0 100 | 47 1 8
8: 0 0 100 | 57 1 9
9: 0 0 100 | 2235 58 68
10: 0 0 100 | 1180 31 100
read write
offset rpcs % cum % | rpcs % cum %
0: 1 1 1 | 69 1 1
1: 1 1 3 | 0 0 1
2: 4 6 9 | 0 0 1
4: 8 12 21 | 0 0 1
8: 13 20 41 | 0 0 1
16: 31 47 89 | 0 0 1
32: 7 10 100 | 0 0 1
64: 0 0 100 | 0 0 1
128: 0 0 100 | 0 0 1
256: 0 0 100 | 69 1 3
512: 0 0 100 | 139 3 7
1024: 0 0 100 | 276 7 14
2048: 0 0 100 | 552 14 29
4096: 0 0 100 | 896 23 52
8192: 0 0 100 | 1792 47 100
osc.hasanfs-OST0002-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.41185 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 3776 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 43 1 1
2: 0 0 0 | 45 1 2
3: 0 0 0 | 46 1 3
4: 0 0 0 | 46 1 4
5: 0 0 0 | 47 1 6
6: 0 0 0 | 48 1 7
7: 0 0 0 | 48 1 8
8: 0 0 0 | 53 1 9
9: 0 0 0 | 2283 60 70
10: 0 0 0 | 1116 29 99
11: 0 0 0 | 1 0 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 68 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 68 1 3
512: 0 0 0 | 136 3 7
1024: 0 0 0 | 272 7 14
2048: 0 0 0 | 544 14 28
4096: 0 0 0 | 896 23 52
8192: 0 0 0 | 1792 47 100
osc.hasanfs-OST0003-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.42422 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 4048 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 44 1 1
2: 0 0 0 | 44 1 2
3: 0 0 0 | 44 1 3
4: 0 0 0 | 44 1 4
5: 0 0 0 | 46 1 5
6: 0 0 0 | 46 1 6
7: 0 0 0 | 48 1 7
8: 0 0 0 | 52 1 9
9: 0 0 0 | 2487 61 70
10: 0 0 0 | 1193 29 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 73 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 73 1 3
512: 0 0 0 | 146 3 7
1024: 0 0 0 | 292 7 14
2048: 0 0 0 | 584 14 28
4096: 0 0 0 | 960 23 52
8192: 0 0 0 | 1920 47 100
osc.hasanfs-OST0004-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.43647 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 4048 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 58 1 1
2: 0 0 0 | 60 1 2
3: 0 0 0 | 60 1 4
4: 0 0 0 | 60 1 5
5: 0 0 0 | 60 1 7
6: 0 0 0 | 61 1 8
7: 0 0 0 | 62 1 10
8: 0 0 0 | 69 1 12
9: 0 0 0 | 2553 63 75
10: 0 0 0 | 1005 24 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 73 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 73 1 3
512: 0 0 0 | 146 3 7
1024: 0 0 0 | 292 7 14
2048: 0 0 0 | 584 14 28
4096: 0 0 0 | 960 23 52
8192: 0 0 0 | 1920 47 100
osc.hasanfs-OST0005-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.44994 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 3840 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 38 0 0
2: 0 0 0 | 38 0 1
3: 0 0 0 | 38 0 2
4: 0 0 0 | 38 0 3
5: 0 0 0 | 38 0 4
6: 0 0 0 | 39 1 5
7: 0 0 0 | 39 1 6
8: 0 0 0 | 47 1 8
9: 0 0 0 | 2271 59 67
10: 0 0 0 | 1253 32 99
11: 0 0 0 | 1 0 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 69 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 69 1 3
512: 0 0 0 | 138 3 7
1024: 0 0 0 | 276 7 14
2048: 0 0 0 | 552 14 28
4096: 0 0 0 | 912 23 52
8192: 0 0 0 | 1824 47 100
osc.hasanfs-OST0006-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.46277 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 1 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 4016 99 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 2 0 0
2: 0 0 0 | 3 0 0
3: 0 0 0 | 4 0 0
4: 0 0 0 | 3 0 0
5: 0 0 0 | 4 0 0
6: 0 0 0 | 6 0 0
7: 0 0 0 | 9 0 0
8: 0 0 0 | 72 1 2
9: 0 0 0 | 2083 51 54
10: 0 0 0 | 1768 44 98
11: 0 0 0 | 56 1 99
12: 0 0 0 | 5 0 99
13: 0 0 0 | 1 0 99
14: 0 0 0 | 1 0 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 74 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 74 1 3
512: 0 0 0 | 148 3 7
1024: 0 0 0 | 296 7 14
2048: 0 0 0 | 592 14 29
4096: 0 0 0 | 977 24 53
8192: 0 0 0 | 1856 46 100
osc.hasanfs-OST0007-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.47655 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 3648 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 7 0 0
2: 0 0 0 | 8 0 0
3: 0 0 0 | 8 0 0
4: 0 0 0 | 9 0 0
5: 0 0 0 | 10 0 1
6: 0 0 0 | 13 0 1
7: 0 0 0 | 34 0 2
8: 0 0 0 | 134 3 6
9: 0 0 0 | 1929 52 58
10: 0 0 0 | 1434 39 98
11: 0 0 0 | 61 1 99
12: 0 0 0 | 1 0 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 66 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 66 1 3
512: 0 0 0 | 132 3 7
1024: 0 0 0 | 264 7 14
2048: 0 0 0 | 528 14 28
4096: 0 0 0 | 864 23 52
8192: 0 0 0 | 1728 47 100
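A quick sanity check on the rpc_stats dumps above: almost every write RPC carries 256 pages, which (assuming the usual 4 KiB x86_64 client page size) is a full 1 MiB RPC. Summing the 256-page write buckets across the eight OSTs gives the total volume written:

```python
# Nearly all write RPCs above sit in the 256-pages-per-rpc bucket.
PAGE_SIZE = 4096            # assumed 4 KiB client page size
pages_per_rpc = 256
rpc_bytes = pages_per_rpc * PAGE_SIZE
print(rpc_bytes)            # 1048576, i.e. full 1 MiB RPCs

# 256-page write RPC counts copied from the dumps above (OST0000..OST0007):
writes_256 = [3648, 3792, 3776, 4048, 4048, 3840, 4016, 3648]
total_bytes = sum(writes_256) * rpc_bytes
print(total_bytes / 2**30)  # 30.09375, i.e. ~30.1 GiB written
```

Meanwhile the read side of every rpc_stats table is essentially zero, consistent with the extents_stats at the top: the client is issuing large, well-formed write RPCs but almost no read RPCs at all.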
- Md. Hasanur Rashid
On Wed, Oct 13, 2021 at 10:40 AM Patrick Farrell <pfarrell at ddn.com> wrote:
>
> Interesting! Usually that would flush the cache, but sometimes drop
> caches doesn't seem to cause recently written data to be flushed, even
> after sync. (There is probably a minor bug here.) But try this:
>
> After your sync and drop caches commands, clear all the LDLM locks on the
> client:
>
> lctl set_param ldlm.namespaces.*.lru_size=clear
>
> That removes the LDLM locks protecting the data, which should force it out
> of cache. Note that locks aren't cleared if there's dirty data, so sync
> first is essential.
>
> -Patrick
> ------------------------------
> *From:* lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on
> behalf of Md Hasanur Rashid via lustre-discuss <
> lustre-discuss at lists.lustre.org>
> *Sent:* Wednesday, October 13, 2021 7:36 AM
> *To:* lustre-discuss at lists.lustre.org <lustre-discuss at lists.lustre.org>
> *Subject:* [lustre-discuss] No read throughput shown for the sequential
> read write Filebench workload
>
> Hello Everyone,
>
> I am running a Filebench workload which is provided below:
>
> define fileset name="testF",entries=100,filesize=16m,prealloc,path="/mnt/hasanfs/tmp1"
>
> define process name="readerP",instances=2 {
> thread name="readerT",instances=4 {
> flowop openfile name="openOP",filesetname="testF"
> flowop writewholefile name="writeOP",iters=4,filesetname="testF"
> flowop readwholefile name="readOP",iters=1,filesetname="testF"
> flowop closefile name="closeOP"
> }
> }
>
> create files
> system "sync"
> system "echo 3 > /proc/sys/vm/drop_caches"
>
> run 60
>
> I am running the workload in a Lustre cluster. When I check the stats on
> the server side, I see the following (stats from one OSS are provided):
>
> obdfilter.hasanfs-OST0000.stats=
> snapshot_time 1633361244.519122 secs.usecs
> read_bytes 1 samples [bytes] 4096 4096 4096
> write_bytes 8479 samples [bytes] 1048576 1048576 8890875904
> destroy 13 samples [reqs]
> statfs 76 samples [reqs]
> preprw 8480 samples [reqs]
> commitrw 8480 samples [reqs]
> ping 57 samples [reqs]
>
> obdfilter.hasanfs-OST0000.brw_stats=
> snapshot_time: 1633361244.519588 (secs.usecs)
>
> read | write
> pages per bulk r/w rpcs % cum % | rpcs % cum %
> 256: 0 0 0 | 8479 100 100
>
> read | write
> discontiguous pages rpcs % cum % | rpcs % cum %
> 0: 0 0 0 | 8479 100 100
>
> read | write
> discontiguous blocks rpcs % cum % | rpcs % cum %
> 0: 0 0 0 | 8479 100 100
>
> read | write
> disk fragmented I/Os ios % cum % | ios % cum %
> 1: 0 0 0 | 7499 88 88
> 2: 0 0 0 | 980 11 100
>
> read | write
> disk I/Os in flight ios % cum % | ios % cum %
> 1: 0 0 0 | 7921 83 83
> 2: 0 0 0 | 1407 14 98
> 3: 0 0 0 | 116 1 99
> 4: 0 0 0 | 15 0 100
>
> read | write
> I/O time (1/1000s) ios % cum % | ios % cum %
> 2: 0 0 0 | 1963 23 23
> 4: 0 0 0 | 5627 66 89
> 8: 0 0 0 | 837 9 99
> 16: 0 0 0 | 25 0 99
> 32: 0 0 0 | 27 0 100
>
> read | write
> disk I/O size ios % cum % | ios % cum %
> 4K: 0 0 0 | 62 0 0
> 8K: 0 0 0 | 127 1 1
> 16K: 0 0 0 | 113 1 3
> 32K: 0 0 0 | 0 0 3
> 64K: 0 0 0 | 0 0 3
> 128K: 0 0 0 | 65 0 3
> 256K: 0 0 0 | 0 0 3
> 512K: 0 0 0 | 1127 11 15
> 1M: 0 0 0 | 7965 84 100
>
> *Can anyone please explain to me why I am not seeing any read operations
> in the stats?*
>
> Thanks,
> Md. Hasanur Rashid
>