[Lustre-discuss] tuning max_sectors

Robin Humble rjh+lustre at cita.utoronto.ca
Fri Apr 17 05:40:09 PDT 2009


On Fri, Apr 17, 2009 at 07:25:30AM -0400, Brian J. Murrell wrote:
>On Fri, 2009-04-17 at 13:08 +0200, Götz Waschk wrote:
>> Lustre: zn_atlas-OST0000: underlying device cciss/c1d0p1 should be tuned for larger I/O requests: max_sectors = 1024 could be up to max_hw_sectors=2048

we have a similar problem:
  Lustre: short-OST0001: underlying device md0 should be tuned for larger I/O requests: max_sectors = 1024 could be up to max_hw_sectors=1280

>> What can I do?
>IIRC, that's in reference to /sys/block/$device/queue/max_sectors_kb.
>If you inspect it, it should report 1024.  You can simply echo a new
>value into it, the same way you can with /proc variables.

sadly, that sysfs entry doesn't exist:
  cat: /sys/block/md0/queue/max_sectors_kb: No such file or directory

do you have any other suggestions?
perhaps the devices below md need looking at instead?
they all report /sys/block/sd*/queue/max_sectors_kb == 512, which is
1024 sectors, the same limit the md0 warning mentions.
we have an md raid6 8+2.
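
for the archives, here's roughly what I'd try on the member disks
(untested sketch: sdc..sdl are just our drive names, and I'm assuming
max_hw_sectors_kb reports the real hardware ceiling):

  # bump each member disk's request size cap up to its hardware limit
  for dev in sdc sdd sde sdf sdg sdh sdi sdj sdk sdl; do
      q=/sys/block/$dev/queue
      echo $(cat $q/max_hw_sectors_kb) > $q/max_sectors_kb
  done

no idea yet whether md then passes larger requests down to the disks,
or whether the md0 warning goes away.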

uname -a
  Linux sox2 2.6.18-92.1.10.el5_lustre.1.6.6.fixR5 #2 SMP Wed Feb 4 16:58:30 EST 2009 x86_64 x86_64 x86_64 GNU/Linux
(that's 1.6.6 plus the patch from bz 15428, which I think is now in 1.6.7.1)

cat /proc/mdstat
...
md0 : active raid6 sdc[0] sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdd[1]
      5860595712 blocks level 6, 64k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
                in: 64205147 reads, 97489370 writes; out: 3730773413 reads, 3281459807 writes
                2222983790 in raid5d, 498868 out of stripes, 4280451425 handle called
                reads: 0 for rmw, 709671189 for rcw. zcopy writes: 1573400576, copied writes: 20983045
                0 delayed, 0 bit delayed, 0 active, queues: 0 in, 0 out
                0 expanding overlap

cheers,
robin


