[Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

Anjana Kar kar at psc.edu
Tue Oct 15 12:55:25 PDT 2013


I'd like to report that we've had success in setting up an
ldiskfs MDT and zfs OSTs on a single node with lustre
version 2.4 (g1cff80a). Something must have been fixed or
changed in this tree, since the install steps didn't
change as far as I can tell.

Also, I was building RPMs from the spl and zfs sources, so I didn't
have to add anything to /etc/ld.so.conf.d, but thanks for the
suggestions. On to the testing stage...
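
For reference, the build sequence was roughly the following; the version
numbers and paths here are illustrative, not exact:

# build SPL RPMs against the running Lustre kernel
cd /usr/src/spl-0.6.2
./configure --with-linux=/usr/src/kernels/$(uname -r)
make rpm

# then ZFS, pointing configure at the SPL source tree
cd /usr/src/zfs-0.6.2
./configure --with-linux=/usr/src/kernels/$(uname -r) --with-spl=/usr/src/spl-0.6.2
make rpm

rpm -ivh spl-*.rpm zfs-*.rpm

Installing via RPM places the libraries in the standard system paths,
which is why no ld.so.conf.d entry was needed.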

-Anjana

On 10/09/2013 05:30 AM, Thomas Stibor wrote:
> Hello Anjana,
>
> I can confirm that this setup works (ZFS-MGS/MDT or LDISKFS-MGS/MDT and
> ZFS-OSS/OST).
>
> I used a CentOS 6.4 build:
> 2.4.0-RC2-gd3f91c4-PRISTINE-2.6.32-358.6.2.el6_lustre.g230b174.x86_64
> and the Lustre Packages from
> http://downloads.whamcloud.com/public/lustre/latest-feature-release/el6/server/RPMS/x86_64/
>
> ZFS was downloaded from ZOL and compiled/installed.
>
> SPL: Loaded module v0.6.2-1
> SPL: using hostid 0x00000000
> ZFS: Loaded module v0.6.2-1, ZFS pool version 5000, ZFS filesystem version 5
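>
> (Those lines are from dmesg after loading the modules; a quick sanity
> check, roughly:)
>
> modprobe zfs
> dmesg | grep -E 'SPL|ZFS'
> cat /sys/module/spl/version /sys/module/zfs/version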
>
> I first ran into the same problem:
>
> mkfs.lustre --fsname=lustrefs --reformat --ost --backfstype=zfs .....
> mkfs.lustre FATAL: unable to prepare backend (22)
> mkfs.lustre: exiting with 22 (Invalid argument)
>
> and saw that the ZFS libraries in /usr/local/lib were not known to CentOS 6.4.
>
> A quick:
>
> echo "/usr/local/lib" >> /etc/ld.so.conf.d/zfs.conf
> echo "/usr/local/lib64" >> /etc/ld.so.conf.d/zfs.conf
> ldconfig
>
> solved the problem.
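>
> (To verify the linker cache picked the libraries up afterwards:)
>
> ldconfig -p | grep -E 'libzfs|libzpool|libnvpair'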
>
> (LDISKFS)
> mkfs.lustre --reformat --mgs /dev/sda16
> mkfs.lustre --reformat --fsname=zlust --mgsnode=10.16.0.104@o2ib0 --mdt
> --index=0 /dev/sda5
>
> (ZFS)
> mkfs.lustre --reformat --mgs --backfstype=zfs mgs/mgs /dev/sda16
> mkfs.lustre --reformat --fsname=zlust --mgsnode=10.16.0.104@o2ib0 --mdt
> --index=0 --backfstype=zfs mdt0/mdt0 /dev/sda5
>
> is working fine.
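>
> (After formatting, the targets are brought up by mounting with -t lustre;
> the mount points below are just examples:)
>
> mkdir -p /mnt/mgs /mnt/mdt0
> mount -t lustre mgs/mgs /mnt/mgs       # zfs targets: pool/dataset
> mount -t lustre mdt0/mdt0 /mnt/mdt0
> # ldiskfs targets take the block device: mount -t lustre /dev/sda16 /mnt/mgs
>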
> The OSS/OST is a Debian wheezy box with a 70-disk JBOD, kernel
> 3.6.11-lustre-tstibor-build (patch series 3.x-fc18.series),
> and SPL/ZFS v0.6.2-1.
>
> Best,
>   Thomas
>
> On 10/08/2013 05:40 PM, Anjana Kar wrote:
>> The git checkout was on Sep. 20. Was the patch committed before or after that?
>>
>> The zpool create command successfully creates a raidz2 pool, and mkfs.lustre
>> does not complain, but:
>>
>> [root@cajal kar]# zpool list
>> NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>> lustre-ost0  36.2T  2.24M  36.2T     0%  1.00x  ONLINE  -
>>
>> [root@cajal kar]# /usr/sbin/mkfs.lustre --fsname=cajalfs --ost
>> --backfstype=zfs --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0
>>
>> [root@cajal kar]# /sbin/service lustre start lustre-ost0
>> lustre-ost0 is not a valid lustre label on this node
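>>
>> (To see what service label, if any, mkfs.lustre recorded, I believe the
>> dataset properties can be inspected; the dataset name below is
>> illustrative:)
>>
>> zfs list -r lustre-ost0
>> zfs get lustre:svname lustre-ost0/ost0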
>>
>> I think we'll be splitting the MDS and OSTs across 2 nodes, as some of
>> you said there could be other issues down the road, but thanks for all
>> the good suggestions.
>>
>> -Anjana
>>
>> On 10/07/2013 07:24 PM, Ned Bass wrote:
>>> I'm guessing your git checkout doesn't include this commit:
>>>
>>> * 010a78e Revert "LU-3682 tunefs: prevent tunefs running on a mounted device"
>>>
>>> It looks like the LU-3682 patch introduced a bug that could cause your issue,
>>> so it's been reverted in the latest master.
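>>>
>>> (A quick way to check whether a checkout contains the revert:)
>>>
>>> git log --oneline | grep 010a78e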
>>>
>>> Ned
>>>
>>> On Mon, Oct 07, 2013 at 04:54:13PM -0400, Anjana Kar wrote:
>>>> On 10/07/2013 04:27 PM, Ned Bass wrote:
>>>>> On Mon, Oct 07, 2013 at 02:23:32PM -0400, Anjana Kar wrote:
>>>>>> Here is the exact command used to create a raidz2 pool with 8+2 drives,
>>>>>> followed by the error messages:
>>>>>>
>>>>>> mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs
>>>>>> --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2
>>>>>> /dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm
>>>>>> /dev/sdo /dev/sdq /dev/sds
>>>>>>
>>>>>> mkfs.lustre FATAL: Invalid filesystem name /dev/sds
>>>>> It seems that either the version of mkfs.lustre you are using has a
>>>>> parsing bug, or there was some sort of syntax error in the actual
>>>>> command entered.  If you are certain your command line is free from
>>>>> errors, please post the version of lustre you are using, or report the
>>>>> bug in the Lustre issue tracker.
>>>>>
>>>>> Thanks,
>>>>> Ned
>>>> For building this server, I followed the steps from the walk-thru-build*
>>>> for CentOS 6.4,
>>>> and added --with-spl and --with-zfs when configuring lustre.
>>>> *https://wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821
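>>>>
>>>> (Roughly, the configure step looked like this; the kernel and spl/zfs
>>>> source paths are illustrative:)
>>>>
>>>> cd lustre-release
>>>> ./configure --with-linux=/usr/src/kernels/2.6.32.358.18.1.el6_lustre2.4 \
>>>>     --with-spl=/usr/src/spl-0.6.2 --with-zfs=/usr/src/zfs-0.6.2
>>>> make rpms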
>>>>
>>>> The spl and zfs modules were installed from source for the lustre 2.4
>>>> kernel (2.6.32.358.18.1.el6_lustre2.4).
>>>>
>>>> Device sds appears to be valid, but I will try issuing the command
>>>> using by-path names.
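>>>>
>>>> (Something along these lines:)
>>>>
>>>> ls -l /dev/disk/by-path/
>>>> # then substitute the by-path names for the /dev/sdX devices in the
>>>> # mkfs.lustre command above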
>>>>
>>>> -Anjana
