[Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

Ned Bass bass6 at llnl.gov
Tue Oct 8 09:28:05 PDT 2013


On Tue, Oct 08, 2013 at 11:40:30AM -0400, Anjana Kar wrote:
> The git checkout was on Sep. 20. Was the patch before or after?

The bug was introduced on Sep. 10 and reverted on Sep. 24, so you hit
the lucky window.  :)

> The zpool create command successfully creates a raidz2 pool, and mkfs.lustre
> does not complain, but

The pool you created with zpool create was just for testing.  I would
recommend destroying that pool, rebuilding your lustre packages from the
latest master (or better yet, a stable tag such as v2_4_1_0), and
starting over with your original mkfs.lustre command.  This would ensure
that your pool is properly configured for use with lustre.
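
For example, the start-over path would look roughly like this (the raidz2
device names below are placeholders for whatever vdevs your pool actually
uses):

  zpool destroy lustre-ost0
  mkfs.lustre --fsname=cajalfs --ost --backfstype=zfs --index=0 \
      --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 \
      raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

Letting mkfs.lustre create the pool means it also takes care of the pool
properties for you (such as canmount=off on the root dataset).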

If you'd prefer to keep this pool, you should set canmount=off on the
root dataset, as mkfs.lustre would have done:

  zfs set canmount=off lustre-ost0
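
You can confirm the setting afterwards with:

  zfs get canmount lustre-ost0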

> 
> [root@cajal kar]# zpool list
> NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> lustre-ost0  36.2T  2.24M  36.2T     0%  1.00x  ONLINE  -
> 
> [root@cajal kar]# /usr/sbin/mkfs.lustre --fsname=cajalfs --ost
> --backfstype=zfs --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0

This command seems to be missing the dataset name, e.g. lustre-ost0/ost0.
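
If you do keep the existing pool, the corrected command would be roughly
(ost0 is just a suggested dataset name):

  /usr/sbin/mkfs.lustre --fsname=cajalfs --ost --backfstype=zfs --index=0 \
      --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0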

> 
> [root@cajal kar]# /sbin/service lustre start lustre-ost0
> lustre-ost0 is not a valid lustre label on this node

As mentioned elsewhere, this looks like an ldev.conf configuration
error.
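
For reference, the /etc/ldev.conf entry for this OST would look something
like the line below (the cajalfs-OST0000 label assumes fsname cajalfs and
index 0, and the first field must match this node's hostname):

  cajal - cajalfs-OST0000 zfs:lustre-ost0/ost0

The service script expects a target label listed in ldev.conf for this
node (e.g. cajalfs-OST0000) rather than the pool name, which is likely
why it rejected lustre-ost0.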

Ned


