[Lustre-discuss] tuning max_sectors

Andreas Dilger adilger at sun.com
Fri Apr 17 12:06:39 PDT 2009


On Apr 17, 2009  08:40 -0400, Robin Humble wrote:
> we have a similar problem.
>   Lustre: short-OST0001: underlying device md0 should be tuned for larger I/O requests: max_sectors = 1024 could be up to max_hw_sectors=1280
> 
> sadly, that sys entry doesn't exist:
>   cat: /sys/block/md0/queue/max_sectors_kb: No such file or directory
> 
> do you have any other suggestions?
> perhaps the devices below md need looking at?
> they all report /sys/block/sd*/queue/max_sectors_kb == 512.
> we have an md raid6 8+2.

Since an MD RAID device is really composed of underlying disks, and doing
the mapping from /dev/md0 -> /sys/block/sd* is difficult, mount.lustre
can't do the tuning itself.  Instead, you should add a loop like the
following to /etc/rc.local:

# raise each member disk's max_sectors_kb to its hardware limit so the
# MD device can pass full-sized I/O requests down to the disks
for DEV in sdc sdl sdk sdj sdi sdh sdg sdf sde sdd; do
	cat /sys/block/$DEV/queue/max_hw_sectors_kb > \
		/sys/block/$DEV/queue/max_sectors_kb
done
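
If you'd rather not hardcode the member names, the list can be derived
from sysfs instead.  This is only a sketch, assuming your kernel exposes
the md members as /sys/block/md0/slaves/* symlinks (2.6 kernels
generally do):

for DEV in /sys/block/md0/slaves/*; do
	# each entry is a symlink into the member disk's sysfs directory
	cat $DEV/queue/max_hw_sectors_kb > $DEV/queue/max_sectors_kb
done

Either way, running "grep . /sys/block/sd*/queue/max_sectors_kb"
afterwards should show each disk at its hardware limit rather than the
512 default, and the warning should no longer appear at mount time.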

> uname -a
>   Linux sox2 2.6.18-92.1.10.el5_lustre.1.6.6.fixR5 #2 SMP Wed Feb 4 16:58:30 EST 2009 x86_64 x86_64 x86_64 GNU/Linux
> (which is 1.6.6 plus the patch from bz 15428, which I think is now in 1.6.7.1)
> 
> cat /proc/mdstat
> ...
> md0 : active raid6 sdc[0] sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdd[1]
>       5860595712 blocks level 6, 64k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
>                 in: 64205147 reads, 97489370 writes; out: 3730773413 reads, 3281459807 writes
>                 2222983790 in raid5d, 498868 out of stripes, 4280451425 handle called
>                 reads: 0 for rmw, 709671189 for rcw. zcopy writes: 1573400576, copied writes: 20983045
>                 0 delayed, 0 bit delayed, 0 active, queues: 0 in, 0 out
>                 0 expanding overlap
> 
> cheers,
> robin

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.



