[Lustre-discuss] Fwd: max_sectors_kb change doesn't help

Erich Focht efocht at hpce.nec.com
Thu Sep 27 02:28:50 PDT 2007


On Thursday 27 September 2007 10:52, Andreas Dilger wrote:

> In fact there isn't any such detection in Lustre - it will push pages into
> an IO until the block layer tells it to stop.
> 
> Please check /proc/fs/lustre/obdfilter/*/brw_stats to see if the IO requests
> coming from the client are 1MB in size (256 pages), and if yes then the issue
> would likely be in the block layer.

The output is below. I see 256 pages per transfer, but I also see that
every write shows up as a "disk fragmented I/O" split into two pieces.
That sounds related; can I influence the fragmentation somehow?
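
(In case someone wants to repeat the check on another OSS: a rough
sketch for pulling just the interesting stanzas out of brw_stats for
all OSTs could look like the following. The stanza names and layout
are assumed to match the output further below.)

import glob

# Stanzas of interest; names taken from the brw_stats output below.
WANTED = ("pages per bulk r/w", "disk fragmented I/Os", "disk I/O size")

for path in glob.glob("/proc/fs/lustre/obdfilter/*/brw_stats"):
    print("==", path)
    in_wanted = False
    for line in open(path):
        text = line.strip()
        # A stanza starts with its name, followed by the column headers.
        if text.startswith(WANTED):
            in_wanted = True
            print(text)
        elif in_wanted and text and text[0].isdigit():
            # histogram bucket rows, e.g. "256: 0 0 0 | 955 100 100"
            print("   ", text)
        else:
            in_wanted = False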

BTW: I'm running on a RHEL5 system with the noop I/O scheduler. The disks
are now connected through Emulex FC controllers, but I see the same
behavior with SAS storage attached through LSI Logic HBAs.
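
To rule out a too small block layer limit, the relevant queue tunables
on the OSS can be dumped with something like the sketch below (the
device names are placeholders; substitute whatever LUNs sit behind the
OSTs):

import os

DEVICES  = ["sdb", "sdc"]   # placeholders for the OST LUNs
TUNABLES = ["max_sectors_kb", "max_hw_sectors_kb", "nr_requests",
            "scheduler"]

for dev in DEVICES:
    qdir = "/sys/block/%s/queue" % dev
    print("==", dev)
    for name in TUNABLES:
        path = os.path.join(qdir, name)
        if os.path.exists(path):
            # each queue file holds a single value or the scheduler list
            print("  %-18s %s" % (name, open(path).read().strip()))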

                           read      |     write
pages per bulk r/w     rpcs  % cum % |  rpcs  % cum %
256:                     0   0   0   |  955 100 100

                           read      |     write
discontiguous pages    rpcs  % cum % |  rpcs  % cum %
0:                       0   0   0   |  955 100 100

                           read      |     write
discontiguous blocks   rpcs  % cum % |  rpcs  % cum %
0:                       0   0   0   |  955 100 100

                           read      |     write
disk fragmented I/Os   ios   % cum % |  ios   % cum %
2:                       0   0   0   |  955 100 100

                           read      |     write
disk I/Os in flight    ios   % cum % |  ios   % cum %
1:                       0   0   0   |  216  11  11
2:                       0   0   0   |  220  11  22
3:                       0   0   0   |  194  10  32
4:                       0   0   0   |  198  10  43
5:                       0   0   0   |  166   8  52
6:                       0   0   0   |  165   8  60
7:                       0   0   0   |  122   6  67
8:                       0   0   0   |  121   6  73
9:                       0   0   0   |  116   6  79
10:                      0   0   0   |  115   6  85
11:                      0   0   0   |   95   4  90
12:                      0   0   0   |   94   4  95
13:                      0   0   0   |   35   1  97
14:                      0   0   0   |   32   1  98
15:                      0   0   0   |    9   0  99
16:                      0   0   0   |    9   0  99
17:                      0   0   0   |    2   0  99
18:                      0   0   0   |    1   0 100

                           read      |     write
I/O time (1/1000s)     ios   % cum % |  ios   % cum %
4:                       0   0   0   |    3   0   0
8:                       0   0   0   |   17   1   2
16:                      0   0   0   |   98  10  12
32:                      0   0   0   |  326  34  46
64:                      0   0   0   |  370  38  85
128:                     0   0   0   |  129  13  98
256:                     0   0   0   |   10   1  99
512:                     0   0   0   |    2   0 100

                           read      |     write
disk I/O size          ios   % cum % |  ios   % cum %
512K:                    0   0   0   | 1910 100 100
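
Reading the write columns together (my own back-of-the-envelope check,
not part of brw_stats): 955 RPCs of 256 pages went in and 1910 disk
I/Os of 512K came out, i.e. every 1MB RPC is being split into exactly
two 512K disk I/Os.

# Numbers copied from the write columns of the histograms above.
write_rpcs    = 955      # all in the 256-pages bucket
pages_per_rpc = 256
page_kb       = 4        # 4K pages, so 256 pages = 1MB
disk_ios      = 1910     # all in the 512K "disk I/O size" bucket
disk_io_kb    = 512

rpc_kb = pages_per_rpc * page_kb
print("RPC size:          %d KB" % rpc_kb)                         # 1024
print("disk I/Os per RPC: %.1f" % (disk_ios / float(write_rpcs)))  # 2.0
print("KB via RPCs:       %d" % (write_rpcs * rpc_kb))             # 977920
print("KB via disk I/Os:  %d" % (disk_ios * disk_io_kb))           # 977920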


Thanks,
best regards,
Erich


-- 
Dr. Erich Focht
Solution Architecture Group, Linux R&D
NEC High Performance Computing  Europe



