[Lustre-discuss] lustre and software RAID

Samuel Aparicio saparicio at bccrc.ca
Fri Jan 21 13:55:15 PST 2011


e2fsprogs-1.41.10.sun2-0redhat.rhel5.x86_64

mkfs.lustre --ost --fsname=lustre --reformat --mgsnode=11.1.254.3@tcp0 /dev/md2

mdadm -v --create /dev/md2 --chunk=256 --level=raid10 --raid-devices=16 --spare-devices=1 --assume-clean --layout=n2 /dev/etherd/e5.9 /dev/etherd/e5.10 /dev/etherd/e5.11 /dev/etherd/e5.12 /dev/etherd/e5.13 /dev/etherd/e5.14 /dev/etherd/e5.15 /dev/etherd/e5.16 /dev/etherd/e5.17 /dev/etherd/e5.18 /dev/etherd/e5.19 /dev/etherd/e5.20 /dev/etherd/e5.21 /dev/etherd/e5.22 /dev/etherd/e5.23 /dev/etherd/e5.7 /dev/etherd/e5.8
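
For completeness, the mount attempt that produces the errors quoted below is just the standard Lustre OST mount; the mount point shown here is only illustrative:

mkdir -p /mnt/lustre/ost0                    # example mount point
mount -t lustre /dev/md2 /mnt/lustre/ost0    # fails with the LDISKFS / LustreError messages below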


cat /proc/mdstat

md2 : active raid10 etherd/e5.8[16](S) etherd/e5.7[15] etherd/e5.23[14] etherd/e5.22[13] etherd/e5.21[12] etherd/e5.20[11] etherd/e5.19[10] etherd/e5.18[9] etherd/e5.17[8] etherd/e5.16[7] etherd/e5.15[6] etherd/e5.14[5] etherd/e5.13[4] etherd/e5.12[3] etherd/e5.11[2] etherd/e5.10[1] etherd/e5.9[0]
      15628113920 blocks 256K chunks 2 near-copies [16/16] [UUUUUUUUUUUUUUUU]
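
A quick way to sanity-check what mkfs.lustre actually left on the device (assuming the Sun e2fsprogs above and the Lustre utilities are installed; the commands below are only a suggestion):

# does anything identify a filesystem signature on the array?
blkid /dev/md2

# is there a readable ext4/ldiskfs superblock?
dumpe2fs -h /dev/md2 | head -20

# does tunefs.lustre find the Lustre configuration written by mkfs.lustre?
tunefs.lustre --dryrun /dev/md2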


Professor Samuel Aparicio BM BCh PhD FRCPath
Nan and Lorraine Robertson Chair UBC/BC Cancer Agency
675 West 10th, Vancouver V5Z 1L3, Canada.
office: +1 604 675 8200 cellphone: +1 604 762 5178: lab website http://molonc.bccrc.ca

On Jan 21, 2011, at 1:01 PM, Eudes PHILIPPE wrote:

> I’m not an expert on Lustre, I’m just beginning with it :) but…
>  
> What is your version of e2fsprogs?
>  
> What is your command line to format your raid?
>  
> Regards.
>  
> From: lustre-discuss-bounces at lists.lustre.org [mailto:lustre-discuss-bounces at lists.lustre.org] On behalf of Samuel Aparicio
> Sent: Friday, January 21, 2011 21:37
> To: lustre-discuss at lists.lustre.org
> Subject: [Lustre-discuss] lustre and software RAID
>  
> I am having the following issue:
>  
> I am trying to create an ext4-based Lustre filesystem attached to an OSS.
> The disks being used are exported from an external disk enclosure.
> I create a RAID10 set with mdadm from 16 2 TB disks; this part seems fine.
> I am able to format such an array with normal ext4, mount a filesystem, etc.
> However, when I try the same thing but format for a Lustre filesystem, I am unable to mount the filesystem and Lustre does not seem to detect it.
> The Lustre format completes normally, without errors.
>  
> If I arrange to present the disks as a RAID10 set from the external disk enclosure, which has its own internal RAID capability
> (rather than trying to use mdadm on the OSS), the Lustre formatting works fine and I can get a mountable OST.
>  
> the kernel log reports the following when a mount is attempted:
>  
> LDISKFS-fs (md2): VFS: Can't find ldiskfs filesystem
> LustreError: 15241:0:(obd_mount.c:1292:server_kernel_mount()) premount /dev/md2:0x0 ldiskfs failed: -22, ldiskfs2 failed: -19.  Is the ldiskfs module available?
> LustreError: 15241:0:(obd_mount.c:1618:server_fill_super()) Unable to mount device /dev/md2: -22
> LustreError: 15241:0:(obd_mount.c:2050:lustre_fill_super()) Unable to mount  (-22)
>  
> lsmod reports that all the modules are loaded
>  
> fsck reports the following
> fsck 1.41.10.sun2 (24-Feb-2010)
> e2fsck 1.41.10.sun2 (24-Feb-2010)
> fsck.ext2: Superblock invalid, trying backup blocks...
> fsck.ext2: Bad magic number in super-block while trying to open /dev//md2
>  
> It would seem the filesystem has not been written properly, but mkfs.lustre reports no errors...
>  
> Lustre version 1.8.4
> kernel 2.6.18-194.3.1.el5_lustre.1.8.4
> The disk array is a Coraid SATA/AoE device, which has worked fine in every other context.
>  
> Could this be an interaction of Lustre with software RAID on the OSS?
> I wonder if anyone has seen anything like this before.
> Any ideas about this?
>  
>  
>  
>  
> Professor Samuel Aparicio BM BCh PhD FRCPath
> Nan and Lorraine Robertson Chair UBC/BC Cancer Agency
> 675 West 10th, Vancouver V5Z 1L3, Canada.
> office: +1 604 675 8200 cellphone: +1 604 762 5178: lab website http://molonc.bccrc.ca
>  
>  
> 
> 
>  
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
