<div dir="ltr"><div class="gmail_default" style="font-family:verdana,sans-serif"><p style="box-sizing:border-box;margin-bottom:16px;color:rgb(36,41,47);font-family:-apple-system,system-ui,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji";font-size:14px;margin-top:0px"><span style="box-sizing:border-box;font-weight:600">Even after clearing the locks (clearing all the LDLM locks on the client after the sync and drop caches commands), the stats look just the same as before. </span></p><p style="box-sizing:border-box;margin-top:0px;margin-bottom:16px;color:rgb(36,41,47);font-family:-apple-system,system-ui,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji";font-size:14px"><span style="box-sizing:border-box;font-weight:600">For extents_stats, the stats look like this:</span></p><div class="gmail-snippet-clipboard-content gmail-position-relative gmail-overflow-auto" style="box-sizing:border-box;color:rgb(36,41,47);font-family:-apple-system,system-ui,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji";overflow:auto;margin-bottom:0px"><pre style="font-size:11.9px;box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;margin-top:0px;margin-bottom:16px;padding:16px;overflow:auto;line-height:1.45;border-radius:6px"><code style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:11.9px;padding:0px;margin:0px;background:transparent;border-radius:6px;word-break:normal;border:0px;display:inline;overflow:visible;line-height:inherit">llite.hasanfs-ffff882fdc929800.extents_stats=
snapshot_time: 1634138089.122870 (secs.usecs)
read | write
extents calls % cum% | calls % cum%
0K - 4K : 450 100 100 | 0 0 0
4K - 8K : 0 0 100 | 0 0 0
8K - 16K : 0 0 100 | 0 0 0
16K - 32K : 0 0 100 | 0 0 0
32K - 64K : 0 0 100 | 0 0 0
64K - 128K : 0 0 100 | 0 0 0
128K - 256K : 0 0 100 | 0 0 0
256K - 512K : 0 0 100 | 0 0 0
512K - 1024K : 0 0 100 | 0 0 0
1M - 2M : 0 0 100 | 1600 46 46
2M - 4M : 0 0 100 | 0 0 46
4M - 8M : 0 0 100 | 0 0 46
8M - 16M : 0 0 100 | 0 0 46
16M - 32M : 0 0 100 | 1826 53 100</code></pre></div><p style="box-sizing:border-box;margin-bottom:16px;color:rgb(36,41,47);font-family:-apple-system,system-ui,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji";font-size:14px;margin-top:0px"><span style="box-sizing:border-box;font-weight:600">If we take a look at the client-side rpc_stats, the following is observed:</span></p><div class="gmail-snippet-clipboard-content gmail-position-relative gmail-overflow-auto" style="box-sizing:border-box;color:rgb(36,41,47);font-family:-apple-system,system-ui,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji";font-size:14px;overflow:auto"><pre style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:11.9px;margin-top:0px;margin-bottom:16px;padding:16px;overflow:auto;line-height:1.45;border-radius:6px"><code style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:11.9px;padding:0px;margin:0px;background:transparent;border-radius:6px;word-break:normal;border:0px;display:inline;overflow:visible;line-height:inherit">osc.hasanfs-OST0000-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.38657 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 3648 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 47 1 1
2: 0 0 0 | 47 1 2
3: 0 0 0 | 48 1 3
4: 0 0 0 | 49 1 5
5: 0 0 0 | 50 1 6
6: 0 0 0 | 59 1 8
7: 0 0 0 | 55 1 9
8: 0 0 0 | 68 1 11
9: 0 0 0 | 2209 60 72
10: 0 0 0 | 1015 27 99
11: 0 0 0 | 1 0 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 66 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 66 1 3
512: 0 0 0 | 132 3 7
1024: 0 0 0 | 264 7 14
2048: 0 0 0 | 528 14 28
4096: 0 0 0 | 864 23 52
8192: 0 0 0 | 1728 47 100
osc.hasanfs-OST0001-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.39942 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 61 93 93 | 0 0 0
2: 1 1 95 | 0 0 0
4: 0 0 95 | 0 0 0
8: 0 0 95 | 0 0 0
16: 0 0 95 | 0 0 0
32: 0 0 95 | 0 0 0
64: 0 0 95 | 0 0 0
128: 1 1 96 | 1 0 0
256: 2 3 100 | 3792 99 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 63 96 96 | 45 1 1
2: 2 3 100 | 45 1 2
3: 0 0 100 | 45 1 3
4: 0 0 100 | 45 1 4
5: 0 0 100 | 45 1 5
6: 0 0 100 | 49 1 7
7: 0 0 100 | 47 1 8
8: 0 0 100 | 57 1 9
9: 0 0 100 | 2235 58 68
10: 0 0 100 | 1180 31 100
read write
offset rpcs % cum % | rpcs % cum %
0: 1 1 1 | 69 1 1
1: 1 1 3 | 0 0 1
2: 4 6 9 | 0 0 1
4: 8 12 21 | 0 0 1
8: 13 20 41 | 0 0 1
16: 31 47 89 | 0 0 1
32: 7 10 100 | 0 0 1
64: 0 0 100 | 0 0 1
128: 0 0 100 | 0 0 1
256: 0 0 100 | 69 1 3
512: 0 0 100 | 139 3 7
1024: 0 0 100 | 276 7 14
2048: 0 0 100 | 552 14 29
4096: 0 0 100 | 896 23 52
8192: 0 0 100 | 1792 47 100
osc.hasanfs-OST0002-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.41185 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 3776 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 43 1 1
2: 0 0 0 | 45 1 2
3: 0 0 0 | 46 1 3
4: 0 0 0 | 46 1 4
5: 0 0 0 | 47 1 6
6: 0 0 0 | 48 1 7
7: 0 0 0 | 48 1 8
8: 0 0 0 | 53 1 9
9: 0 0 0 | 2283 60 70
10: 0 0 0 | 1116 29 99
11: 0 0 0 | 1 0 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 68 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 68 1 3
512: 0 0 0 | 136 3 7
1024: 0 0 0 | 272 7 14
2048: 0 0 0 | 544 14 28
4096: 0 0 0 | 896 23 52
8192: 0 0 0 | 1792 47 100
osc.hasanfs-OST0003-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.42422 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 4048 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 44 1 1
2: 0 0 0 | 44 1 2
3: 0 0 0 | 44 1 3
4: 0 0 0 | 44 1 4
5: 0 0 0 | 46 1 5
6: 0 0 0 | 46 1 6
7: 0 0 0 | 48 1 7
8: 0 0 0 | 52 1 9
9: 0 0 0 | 2487 61 70
10: 0 0 0 | 1193 29 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 73 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 73 1 3
512: 0 0 0 | 146 3 7
1024: 0 0 0 | 292 7 14
2048: 0 0 0 | 584 14 28
4096: 0 0 0 | 960 23 52
8192: 0 0 0 | 1920 47 100
osc.hasanfs-OST0004-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.43647 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 4048 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 58 1 1
2: 0 0 0 | 60 1 2
3: 0 0 0 | 60 1 4
4: 0 0 0 | 60 1 5
5: 0 0 0 | 60 1 7
6: 0 0 0 | 61 1 8
7: 0 0 0 | 62 1 10
8: 0 0 0 | 69 1 12
9: 0 0 0 | 2553 63 75
10: 0 0 0 | 1005 24 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 73 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 73 1 3
512: 0 0 0 | 146 3 7
1024: 0 0 0 | 292 7 14
2048: 0 0 0 | 584 14 28
4096: 0 0 0 | 960 23 52
8192: 0 0 0 | 1920 47 100
osc.hasanfs-OST0005-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.44994 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 3840 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 38 0 0
2: 0 0 0 | 38 0 1
3: 0 0 0 | 38 0 2
4: 0 0 0 | 38 0 3
5: 0 0 0 | 38 0 4
6: 0 0 0 | 39 1 5
7: 0 0 0 | 39 1 6
8: 0 0 0 | 47 1 8
9: 0 0 0 | 2271 59 67
10: 0 0 0 | 1253 32 99
11: 0 0 0 | 1 0 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 69 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 69 1 3
512: 0 0 0 | 138 3 7
1024: 0 0 0 | 276 7 14
2048: 0 0 0 | 552 14 28
4096: 0 0 0 | 912 23 52
8192: 0 0 0 | 1824 47 100
osc.hasanfs-OST0006-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.46277 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 1 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 4016 99 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 2 0 0
2: 0 0 0 | 3 0 0
3: 0 0 0 | 4 0 0
4: 0 0 0 | 3 0 0
5: 0 0 0 | 4 0 0
6: 0 0 0 | 6 0 0
7: 0 0 0 | 9 0 0
8: 0 0 0 | 72 1 2
9: 0 0 0 | 2083 51 54
10: 0 0 0 | 1768 44 98
11: 0 0 0 | 56 1 99
12: 0 0 0 | 5 0 99
13: 0 0 0 | 1 0 99
14: 0 0 0 | 1 0 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 74 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 74 1 3
512: 0 0 0 | 148 3 7
1024: 0 0 0 | 296 7 14
2048: 0 0 0 | 592 14 29
4096: 0 0 0 | 977 24 53
8192: 0 0 0 | 1856 46 100
osc.hasanfs-OST0007-osc-ffff882fdc929800.rpc_stats=
snapshot_time: 1634137979.47655 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 0 0 0
2: 0 0 0 | 0 0 0
4: 0 0 0 | 0 0 0
8: 0 0 0 | 0 0 0
16: 0 0 0 | 0 0 0
32: 0 0 0 | 0 0 0
64: 0 0 0 | 0 0 0
128: 0 0 0 | 0 0 0
256: 0 0 0 | 3648 100 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
0: 0 0 0 | 0 0 0
1: 0 0 0 | 7 0 0
2: 0 0 0 | 8 0 0
3: 0 0 0 | 8 0 0
4: 0 0 0 | 9 0 0
5: 0 0 0 | 10 0 1
6: 0 0 0 | 13 0 1
7: 0 0 0 | 34 0 2
8: 0 0 0 | 134 3 6
9: 0 0 0 | 1929 52 58
10: 0 0 0 | 1434 39 98
11: 0 0 0 | 61 1 99
12: 0 0 0 | 1 0 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 66 1 1
1: 0 0 0 | 0 0 1
2: 0 0 0 | 0 0 1
4: 0 0 0 | 0 0 1
8: 0 0 0 | 0 0 1
16: 0 0 0 | 0 0 1
32: 0 0 0 | 0 0 1
64: 0 0 0 | 0 0 1
128: 0 0 0 | 0 0 1
256: 0 0 0 | 66 1 3
512: 0 0 0 | 132 3 7
1024: 0 0 0 | 264 7 14
2048: 0 0 0 | 528 14 28
4096: 0 0 0 | 864 23 52
8192: 0 0 0 | 1728 47 100</code></pre></div><div class="gmail-snippet-clipboard-content gmail-position-relative gmail-overflow-auto" style="box-sizing:border-box;color:rgb(36,41,47);font-family:-apple-system,system-ui,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji";overflow:auto;margin-bottom:0px"><pre style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;margin-top:0px;margin-bottom:16px;padding:16px;overflow:auto;line-height:1.45;border-radius:6px"><code style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;padding:0px;margin:0px;background:transparent;border-radius:6px;word-break:normal;border:0px;display:inline;overflow:visible;line-height:inherit">- Md. Hasanur Rashid</code></pre></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Oct 13, 2021 at 10:40 AM Patrick Farrell <<a href="mailto:pfarrell@ddn.com">pfarrell@ddn.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Interesting! Usually that would flush the cache, but s<span style="background-color:rgb(255,255,255);display:inline">ometimes drop caches doesn't seem to cause recently written data to be flushed, even after sync. (There is probably a minor bug
here.) But try this:</span></div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
After your sync and drop caches commands, clear all the LDLM locks on the client:</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
lctl set_param ldlm.namespaces.*.lru_size=clear</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
That removes the LDLM locks protecting the data, which should force it out of cache. Note that locks aren't cleared if there's dirty data, so sync first is essential.</div>
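<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Putting the whole sequence together (a sketch, run as root on the client; only the lru_size line is Lustre-specific):</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<pre style="font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:11.9px;padding:16px;overflow:auto;line-height:1.45;border-radius:6px"># 1. flush dirty pages to the OSTs first -- locks covering dirty data are not cancelled
sync
# 2. drop the client's page, dentry and inode caches
echo 3 > /proc/sys/vm/drop_caches
# 3. cancel all LDLM locks, forcing remaining cached file data out of the client
lctl set_param ldlm.namespaces.*.lru_size=clear</pre>
</div>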
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
-Patrick</div>
<div id="gmail-m_4860994253141422088appendonsend"></div>
<hr style="display:inline-block;width:98%">
<div id="gmail-m_4860994253141422088divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> lustre-discuss <<a href="mailto:lustre-discuss-bounces@lists.lustre.org" target="_blank">lustre-discuss-bounces@lists.lustre.org</a>> on behalf of Md Hasanur Rashid via lustre-discuss <<a href="mailto:lustre-discuss@lists.lustre.org" target="_blank">lustre-discuss@lists.lustre.org</a>><br>
<b>Sent:</b> Wednesday, October 13, 2021 7:36 AM<br>
<b>To:</b> <a href="mailto:lustre-discuss@lists.lustre.org" target="_blank">lustre-discuss@lists.lustre.org</a> <<a href="mailto:lustre-discuss@lists.lustre.org" target="_blank">lustre-discuss@lists.lustre.org</a>><br>
<b>Subject:</b> [lustre-discuss] No read throughput shown for the sequential read write Filebench workload</font>
<div> </div>
</div>
<div>
<div dir="ltr">
<div style="font-family:verdana,sans-serif">Hello Everyone,<br>
<br>
<p style="box-sizing:border-box;margin-bottom:16px;color:rgb(36,41,47);font-size:14px;margin-top:0px">
I am running a Filebench workload which is provided below:</p>
<div style="box-sizing:border-box;color:rgb(36,41,47);font-size:14px;overflow:auto">
<pre style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:11.9px;margin-top:0px;margin-bottom:16px;padding:16px;overflow:auto;line-height:1.45;border-radius:6px"><code style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:11.9px;padding:0px;margin:0px;background:transparent;border-radius:6px;word-break:normal;border:0px;display:inline;overflow:visible;line-height:inherit">define fileset name="testF",entries=100,filesize=16m,prealloc,path="/mnt/hasanfs/tmp1"
define process name="readerP",instances=2 {
thread name="readerT",instances=4 {
flowop openfile name="openOP",filesetname="testF"
flowop writewholefile name="writeOP",iters=4,filesetname="testF"
flowop readwholefile name="readOP",iters=1,filesetname="testF"
flowop closefile name="closeOP"
}
}
create files
system "sync"
system "echo 3 > /proc/sys/vm/drop_caches"
run 60
</code></pre>
</div>
<p style="box-sizing:border-box;margin-top:0px;margin-bottom:16px;color:rgb(36,41,47);font-size:14px">
I am running the workload in a Lustre cluster. When I check the log from the server side, it shows the following (I have provided the stats of one OSS):</p>
<div style="box-sizing:border-box;color:rgb(36,41,47);font-size:14px;overflow:auto">
<pre style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:11.9px;margin-top:0px;margin-bottom:16px;padding:16px;overflow:auto;line-height:1.45;border-radius:6px"><code style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:11.9px;padding:0px;margin:0px;background:transparent;border-radius:6px;word-break:normal;border:0px;display:inline;overflow:visible;line-height:inherit">obdfilter.hasanfs-OST0000.stats=
snapshot_time 1633361244.519122 secs.usecs
read_bytes 1 samples [bytes] 4096 4096 4096
write_bytes 8479 samples [bytes] 1048576 1048576 8890875904
destroy 13 samples [reqs]
statfs 76 samples [reqs]
preprw 8480 samples [reqs]
commitrw 8480 samples [reqs]
ping 57 samples [reqs]
obdfilter.hasanfs-OST0000.brw_stats=
snapshot_time: 1633361244.519588 (secs.usecs)
read | write
pages per bulk r/w rpcs % cum % | rpcs % cum %
256: 0 0 0 | 8479 100 100
read | write
discontiguous pages rpcs % cum % | rpcs % cum %
0: 0 0 0 | 8479 100 100
read | write
discontiguous blocks rpcs % cum % | rpcs % cum %
0: 0 0 0 | 8479 100 100
read | write
disk fragmented I/Os ios % cum % | ios % cum %
1: 0 0 0 | 7499 88 88
2: 0 0 0 | 980 11 100
read | write
disk I/Os in flight ios % cum % | ios % cum %
1: 0 0 0 | 7921 83 83
2: 0 0 0 | 1407 14 98
3: 0 0 0 | 116 1 99
4: 0 0 0 | 15 0 100
read | write
I/O time (1/1000s) ios % cum % | ios % cum %
2: 0 0 0 | 1963 23 23
4: 0 0 0 | 5627 66 89
8: 0 0 0 | 837 9 99
16: 0 0 0 | 25 0 99
32: 0 0 0 | 27 0 100
read | write
disk I/O size ios % cum % | ios % cum %
4K: 0 0 0 | 62 0 0
8K: 0 0 0 | 127 1 1
16K: 0 0 0 | 113 1 3
32K: 0 0 0 | 0 0 3
64K: 0 0 0 | 0 0 3
128K: 0 0 0 | 65 0 3
256K: 0 0 0 | 0 0 3
512K: 0 0 0 | 1127 11 15
1M: 0 0 0 | 7965 84 100
</code></pre>
</div>
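<p style="box-sizing:border-box;margin-top:0px;margin-bottom:16px;color:rgb(36,41,47);font-size:14px">
(Aside: to make sure a snapshot covers only a single run, the counters can be reset beforehand. On the Lustre versions I have used, writing to a stats file clears it, and extents_stats must be explicitly enabled; a sketch, please verify against your version:)</p>
<div style="box-sizing:border-box;color:rgb(36,41,47);font-size:14px;overflow:auto">
<pre style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:11.9px;margin-top:0px;margin-bottom:16px;padding:16px;overflow:auto;line-height:1.45;border-radius:6px"><code style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:11.9px;padding:0px;margin:0px;background:transparent;border-radius:6px;word-break:normal;border:0px;display:inline;overflow:visible;line-height:inherit"># on the OSS: clear the per-target counters before the run
lctl set_param obdfilter.*.stats=clear
lctl set_param obdfilter.*.brw_stats=clear
# on the client: clear the RPC histograms and enable extent-size stats
lctl set_param osc.*.rpc_stats=clear
lctl set_param llite.*.extents_stats=1</code></pre>
</div>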
<p style="box-sizing:border-box;margin-top:0px;color:rgb(36,41,47);font-size:14px;margin-bottom:0px">
<span style="box-sizing:border-box"><b>Can anyone please explain to me why I am not seeing any read operations in the stats?</b><br>
<br>
Thanks,<br>
Md. Hasanur Rashid</span></p>
</div>
</div>
</div>
</div>
</blockquote></div>