[Lustre-discuss] Fwd: max_sectors_kb change doesn't help

Erich Focht efocht at hpce.nec.com
Mon Oct 1 05:02:38 PDT 2007


Hi Andreas,

On Thursday 27 September 2007 12:34, Andreas Dilger wrote:
> > disk I/O size          ios   % cum % |  ios   % cum %
> > 512K:                    0   0   0   | 1910 100 100
> 
> This generally points to the underlying layer fragmenting the IO, since the
> "disk fragmented I/O" counter is only when we can't add a page to the exising
> bio (see "frags" in lustre/obdfilter/filter_io_26/filter_do_bio()).  The
> culprit is in "can_be_merged()" or "bio_add_page()".

the Lustre debugging messages look like this:
00002000:00000002:3:1191233501.646369:0:15619:0:(filter_io_26.c:339:filter_do_bio()) bio++ sz 524288 vcnt 128(256) sectors 1024(1024) psg 18(128) hsg 18(64)

and are printed by the code:
                                /* Dang! I have to fragment this I/O */
                                CDEBUG(D_INODE, "bio++ sz %d vcnt %d(%d) "
                                       "sectors %d(%d) psg %d(%d) hsg %d(%d)\n",
                                       bio->bi_size,
                                       bio->bi_vcnt, bio->bi_max_vecs,
                                       bio->bi_size >> 9, q->max_sectors,
                                       bio_phys_segments(q, bio),
                                       q->max_phys_segments,
                                       bio_hw_segments(q, bio),
                                       q->max_hw_segments);
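
For context, this CDEBUG sits in the branch where the current page could not be
merged into the bio being built; roughly paraphrased (not verbatim, just my
reading of the filter_do_bio() loop), the surrounding logic is:

        if (bio != NULL &&
            can_be_merged(bio, sector) &&
            bio_add_page(bio, page, blocksize, page_offset) != 0)
                continue;               /* merged into the current bio */

        if (bio != NULL) {
                /* Dang! I have to fragment this I/O */
                CDEBUG(D_INODE, "bio++ ...", ...);
                frags++;                /* the "disk fragmented I/O" counter */
                /* submit the full bio and start a new one */
        }
        bio = bio_alloc(GFP_NOIO, ...);

In the log line above the bio size (1024 sectors) has already reached
q->max_sectors (1024), which is presumably why bio_add_page() refuses to grow
the bio any further.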

The "sectors %d(%d)" field prints bio->bi_size >> 9 and q->max_sectors, and the
log shows "sectors 1024(1024)". This suggests that q->max_sectors is 1024 (i.e.
512 KB), although /sys/block/sd*/queue/max_sectors_kb is set to 2048, which
should translate to 4096 sectors.
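
If I read the generic block layer's sysfs handler right, writing N to
max_sectors_kb should end up as N*2 in q->max_sectors; roughly paraphrased
(not verbatim, and I may be missing a clamp), queue_max_sectors_store() does
something like:

        if (max_sectors_kb > max_hw_sectors_kb || max_sectors_kb < page_kb)
                return -EINVAL;                 /* sysfs write is rejected */
        q->max_sectors = max_sectors_kb << 1;   /* 2048 KB -> 4096 sectors */

So either that write never reached the queue obdfilter actually submits to
(the dm-* queue rather than the sd* queues?), or it was rejected.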

Could this problem come from multipath? It "assembles" the dm-* devices out of
the SCSI devices and presents the SCSI devices as "slaves", but the dm-* devices
have no queue parameter settings of their own in /sys/block/dm-*. I tried
increasing max_sectors_kb on the SCSI member devices' queues before starting
multipathd, but it didn't help. Uhmmm, yes, I am using multipath... I forgot to
mention that earlier.
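
To compare the relevant queue setting on a dm-* device and on its sd* slaves,
something along these lines could be used (a quick sketch; the device names
are just examples and would need adjusting):

        /* sketch: print max_sectors_kb for a dm device and its slaves;
         * device names below are examples only */
        #include <stdio.h>

        static void print_limit(const char *dev)
        {
                char path[256], buf[64];
                FILE *f;

                snprintf(path, sizeof(path),
                         "/sys/block/%s/queue/max_sectors_kb", dev);
                f = fopen(path, "r");
                if (f == NULL || fgets(buf, sizeof(buf), f) == NULL)
                        printf("%-8s max_sectors_kb: <not available>\n", dev);
                else
                        printf("%-8s max_sectors_kb: %s", dev, buf);
                if (f != NULL)
                        fclose(f);
        }

        int main(void)
        {
                const char *devs[] = { "dm-0", "sdb", "sdc" };
                unsigned int i;

                for (i = 0; i < sizeof(devs) / sizeof(devs[0]); i++)
                        print_limit(devs[i]);
                return 0;
        }

If the dm-* entry comes back as "<not available>" or stays at 512, that would
at least confirm that the multipath queue is not picking up the larger value
from its slaves.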

Best regards,
Erich



