[Lustre-discuss] lustre and software RAID
Eudes PHILIPPE
eudes at cisneo.fr
Fri Jan 21 13:01:36 PST 2011
I'm not an expert on Lustre, just beginning with it :) but:
What is your version of e2fsprogs?
What is your command line to format your raid?
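For reference, a typical OST format on an md device looks roughly like the sketch below (not the poster's actual command; the fsname, MGS NID, and mount point are placeholders):

```shell
# Hypothetical example: format /dev/md2 as an OST and mount it.
# --fsname and --mgsnode values are placeholders, not from the report.
mkfs.lustre --ost --fsname=lustre --mgsnode=mgs@tcp /dev/md2
mount -t lustre /dev/md2 /mnt/ost0
```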
Regards.
From: lustre-discuss-bounces at lists.lustre.org
[mailto:lustre-discuss-bounces at lists.lustre.org] On behalf of Samuel
Aparicio
Sent: Friday, January 21, 2011 21:37
To: lustre-discuss at lists.lustre.org
Subject: [Lustre-discuss] lustre and software RAID
I am having the following issue: I am trying to create an ext4 Lustre
filesystem attached to an OSS. The disks being used are exported from an
external disk enclosure. I create a RAID10 set with mdadm from 16 2 TB
disks, and this part seems fine: I am able to format such an array with
normal ext4, mount a filesystem, etc. However, when I try the same thing
but format for a Lustre filesystem, I am unable to mount the filesystem
and Lustre does not seem to detect it. The Lustre format completes
normally, without errors.
If I instead present the disks as a RAID10 set from the external disk
enclosure, which has its own internal RAID capability (rather than using
mdadm on the OSS), the Lustre formatting works fine and I get a mountable
OST.
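Not the poster's exact commands, but the failing mdadm path described above might look roughly like this (device names, fsname, and MGS NID are all placeholders):

```shell
# Hypothetical reconstruction of the steps described in the report;
# member device names /dev/sd[b-q] and the MGS NID are assumptions.
mdadm --create /dev/md2 --level=10 --raid-devices=16 /dev/sd[b-q]
mkfs.lustre --ost --fsname=lustre --mgsnode=mgs@tcp /dev/md2
mount -t lustre /dev/md2 /mnt/ost0   # this is the step that fails
```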
The kernel log reports the following when a mount is attempted:
LDISKFS-fs (md2): VFS: Can't find ldiskfs filesystem
LustreError: 15241:0:(obd_mount.c:1292:server_kernel_mount()) premount
/dev/md2:0x0 ldiskfs failed: -22, ldiskfs2 failed: -19. Is the ldiskfs
module available?
LustreError: 15241:0:(obd_mount.c:1618:server_fill_super()) Unable to mount
device /dev/md2: -22
LustreError: 15241:0:(obd_mount.c:2050:lustre_fill_super()) Unable to mount
(-22)
lsmod reports that all the modules are loaded.
fsck reports the following:
fsck 1.41.10.sun2 (24-Feb-2010)
e2fsck 1.41.10.sun2 (24-Feb-2010)
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev//md2
It would seem the filesystem has not been written properly, but mkfs
reports no errors.
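One way to check whether a valid ldiskfs/ext4 superblock was actually written to the array (a suggestion, not something from the original thread; /dev/md2 as in the report):

```shell
# The ext4 superblock starts 1024 bytes into the device; its magic number
# 0xEF53 (little-endian bytes: 53 ef) sits at offset 0x38 within the
# superblock, i.e. absolute offset 0x438 on the device.
dd if=/dev/md2 bs=1024 skip=1 count=1 2>/dev/null | od -Ax -tx1 | grep '^000030'
# dumpe2fs summarizes the superblock fields, if a superblock is present:
dumpe2fs -h /dev/md2
```

If the magic bytes are absent right after mkfs.lustre completes, the format is writing somewhere other than where the mount later reads, which would point at the md layer rather than at Lustre itself.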
Lustre version: 1.8.4
Kernel: 2.6.18-194.3.1.el5_lustre.1.8.4
The disk array is a Coraid SATA/AoE device which has worked fine in every
other context.
This seems like an interaction of Lustre with software RAID on the OSS?
I wonder if anyone has seen anything like this before. Any ideas?
Professor Samuel Aparicio BM BCh PhD FRCPath
Nan and Lorraine Robertson Chair UBC/BC Cancer Agency
675 West 10th, Vancouver V5Z 1L3, Canada.
office: +1 604 675 8200  cellphone: +1 604 762 5178
lab website: http://molonc.bccrc.ca