[Lustre-discuss] SLES11, Lustre 1.8.2 with LVM and multipathing problems

lustre lustre-info at navum.de
Wed Jun 16 02:23:42 PDT 2010


Hello Folks,

We have one LUN on our MGS/MGT server.
The LUN is available over two paths
(multipathing via the OS-embedded RDAC driver on SLES 11).
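
For reference, an array of this type is typically driven by a multipath.conf
device stanza along these lines (a sketch only, not our exact configuration,
so treat the values as assumptions):

device {
        # SUN/LSI arrays reported as LCSM100_S use the RDAC hardware handler
        vendor                  "SUN"
        product                 "LCSM100_S"
        hardware_handler        "1 rdac"
        path_grouping_policy    group_by_prio
        path_checker            rdac
        failback                immediate
        features                "1 queue_if_no_path"
}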

snowball-mds2:/proc/fs # multipath -ll
3600a0b80005a7215000002034b952b00 dm-10 SUN,LCSM100_S
[size=419G][features=1 queue_if_no_path][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=6][active]
  \_ 6:0:1:1 sdd 8:48 [active][ready]
\_ round-robin 0 [prio=1][enabled]
  \_ 6:0:0:1 sdb 8:16 [active][ghost]

We created an LVM logical volume on this LUN:

snowball-mds2:~ # lvscan
   ACTIVE            '/dev/mds2/mgs2' [418.68 GB] inherit

Everything works fine.
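
(The volume was created on the dm-multipath device, roughly as follows;
the exact commands are reconstructed from memory, so treat them as an
assumption. The VG/LV names match the lvscan output above.)

# put the PV on the multipath device, not on sdb/sdd directly
pvcreate /dev/mapper/3600a0b80005a7215000002034b952b00
vgcreate mds2 /dev/mapper/3600a0b80005a7215000002034b952b00
# one LV spanning the whole VG
lvcreate -l 100%FREE -n mgs2 mds2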

Then we switched the active controller on the storage array to simulate a path failover:

# multipath -ll
3600a0b80005a7215000002034b952b00 dm-10 SUN,LCSM100_S
[size=419G][features=1 queue_if_no_path][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=1][enabled]
  \_ 6:0:1:1 sdd 8:48 [active][ghost]
\_ round-robin 0 [prio=6][enabled]
  \_ 6:0:0:1 sdb 8:16 [active][ready]


After that, the MDT device is unhealthy:

snowball-mds2:/proc/fs # cat /proc/fs/lustre/health_check
device tools-MDT0000 reported unhealthy
NOT HEALTHY
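
The per-device state can also be listed on the MDS with lctl
(output omitted here):

snowball-mds2:~ # lctl dl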

We cannot remount the filesystem, so the filesystem is not writable.

We can see this in /var/log/messages, where there is a warning about this
filesystem being in read-only mode.
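
For reference, the read-only state also shows up as "ro" in the options
column of /proc/mounts:

snowball-mds2:~ # grep lustre /proc/mounts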


tunefs.lustre also fails; note that the values it reads back look like
those of a freshly formatted, never-configured target:

snowball-mds2:/proc/fs # tunefs.lustre --dryrun /dev/mds2/mgs2
checking for existing Lustre data: found CONFIGS/mountdata
Reading CONFIGS/mountdata

   Read previous values:
Target:
Index:      unassigned
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x70
              (needs_index first_time update )
Persistent mount opts:
Parameters:

tunefs.lustre FATAL: must set target type: MDT,OST,MGS
tunefs.lustre: exiting with 22 (Invalid argument)



After a reboot, everything works fine again.
Is there a problem with the LVM configuration?
We found a document on enabling multipath support in LVM2, but following
it did not help; a sketch of what we understand it to mean is below.
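
Presumably that document means restricting LVM scanning to the dm-multipath
devices, so the VG is never assembled on the underlying paths (sdb/sdd).
In /etc/lvm/lvm.conf that is usually a filter along these lines (the exact
regexes here are an assumption):

# /etc/lvm/lvm.conf: accept device-mapper devices only, reject raw paths
filter = [ "a|^/dev/mapper/.*|", "r|.*|" ]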

Is Lustre 1.8.2 supported on top of LVM and multipathing?


We are concerned about the availability and consistency of the Lustre
filesystem, especially the metadata, because the metadata is not
correctly available after a path failover of the metadata (MDT) device.
The path failover should be completely transparent to the LVM LUN used
for the MDT and to the Lustre filesystem on it. Is this correct?
We tested the path-failover functionality with a simple ext3 filesystem
on the same device and could not see any problem.
Also, I think it is not recommended to configure the Lustre filesystem
to remain writable when an error occurs, isn't it?
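
As far as we understand, ldiskfs targets are normally formatted with
errors=remount-ro, so dropping to read-only on an I/O error is the intended
safe behavior rather than something to disable. The persistent option shows
up in the "Persistent mount opts" line of tunefs.lustre --dryrun and could,
if needed, be set roughly like this (the exact invocation is an assumption):

tunefs.lustre --mountfsoptions="errors=remount-ro" /dev/mds2/mgs2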


Does anyone have experience with the above-mentioned configuration? Are
there any known bugs?


Thanks and regards

Matthias


Additional Information:

snowball-mds2:/proc/fs # uname -a
Linux snowball-mds2 2.6.27.39-0.3_lustre.1.8.2-default #1 SMP 2009-11-23 12:57:38 +0100 x86_64 x86_64 x86_64 GNU/Linux

