[lustre-discuss] Install issues on 2.10.0

John Casu john@chiraldynamics.com
Tue Jul 25 09:21:29 PDT 2017


Just installed the latest Lustre 2.10.0 over ZFS on a vanilla CentOS 7.3.1611 system, using DKMS.
ZFS is 0.6.5.11 from zfsonlinux.org, installed with yum.

Not a single problem during installation, but I am having issues building a Lustre filesystem:
1. Building a separate MGT doesn't seem to work properly, although the MGT/MDT combo
    seems to work just fine.
2. I get "spl_hostid not set" warnings, which I've never seen before (see the sketch after this list).
3. /proc/fs/lustre/health_check seems to be missing.
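
On the spl_hostid warnings, my rough understanding is that the SPL module simply has no
non-zero hostid to work with on this box. Something like the following is what I'd try next
(the 0x01234567 value is just a placeholder, and I haven't confirmed this is the recommended fix):

    # check the current hostid (I assume it reads back as zero here, hence the warning)
    hostid
    # give the spl module an explicit, non-zero hostid (placeholder value)
    echo 'options spl spl_hostid=0x01234567' > /etc/modprobe.d/spl.conf
    # reload spl/zfs (or reboot) so the new hostid takes effect;
    # alternatively, I believe spl reads /etc/hostid by default (spl_hostid_path)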

thanks,
-john c



---------
Building an MGT by itself doesn't seem to work properly:

> [root@fb-lts-mds0 x86_64]# mkfs.lustre --reformat --mgs --force-nohostid --servicenode=192.168.98.113@tcp \
>                                        --backfstype=zfs mgs/mgt
> 
>    Permanent disk data:
> Target:     MGS
> Index:      unassigned
> Lustre FS:  
> Mount type: zfs
> Flags:      0x1064
>               (MGS first_time update no_primnode )
> Persistent mount opts: 
> Parameters: failover.node=192.168.98.113@tcp
> WARNING: spl_hostid not set. ZFS has no zpool import protection
> mkfs_cmd = zfs create -o canmount=off -o xattr=sa mgs/mgt
> WARNING: spl_hostid not set. ZFS has no zpool import protection
> Writing mgs/mgt properties
>   lustre:failover.node=192.168.98.113@tcp
>   lustre:version=1
>   lustre:flags=4196
>   lustre:index=65535
>   lustre:svname=MGS
> [root@fb-lts-mds0 x86_64]# mount.lustre mgs/mgt /mnt/mgs
> WARNING: spl_hostid not set. ZFS has no zpool import protection
> 
> mount.lustre FATAL: unhandled/unloaded fs type 0 'ext3'
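
The 'fs type 0' in that FATAL message makes me wonder whether mount.lustre is even finding
the right backend for an MGS-only ZFS target. These are the checks I was planning to run next
(assuming lustre and osd_zfs are the modules that matter here; I haven't dug any deeper yet):

    # confirm the Lustre and osd-zfs kernel modules are actually loaded
    lsmod | grep -E 'lustre|osd_zfs'
    modprobe lustre
    # confirm the pool is imported and the target properties were written
    zpool list mgs
    zfs get all mgs/mgt | grep lustre: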

If I build the combined MGT/MDT, things go a lot better:

> 
> [root@fb-lts-mds0 x86_64]# mkfs.lustre --reformat --mgs --mdt --force-nohostid --servicenode=192.168.98.113@tcp --backfstype=zfs --index=0 --fsname=test meta/meta
> 
>    Permanent disk data:
> Target:     test:MDT0000
> Index:      0
> Lustre FS:  test
> Mount type: zfs
> Flags:      0x1065
>               (MDT MGS first_time update no_primnode )
> Persistent mount opts: 
> Parameters: failover.node=192.168.98.113@tcp
> WARNING: spl_hostid not set. ZFS has no zpool import protection
> mkfs_cmd = zfs create -o canmount=off -o xattr=sa meta/meta
> WARNING: spl_hostid not set. ZFS has no zpool import protection
> Writing meta/meta properties
>   lustre:failover.node=192.168.98.113@tcp
>   lustre:version=1
>   lustre:flags=4197
>   lustre:index=0
>   lustre:fsname=test
>   lustre:svname=test:MDT0000
> [root@fb-lts-mds0 x86_64]# mount.lustre meta/meta  /mnt/meta
> WARNING: spl_hostid not set. ZFS has no zpool import protection
> [root@fb-lts-mds0 x86_64]# df
> Filesystem          1K-blocks    Used Available Use% Mounted on
> /dev/mapper/cl-root  52403200 3107560  49295640   6% /
> devtmpfs             28709656       0  28709656   0% /dev
> tmpfs                28720660       0  28720660   0% /dev/shm
> tmpfs                28720660   17384  28703276   1% /run
> tmpfs                28720660       0  28720660   0% /sys/fs/cgroup
> /dev/sdb1             1038336  195484    842852  19% /boot
> /dev/mapper/cl-home  34418260   32944  34385316   1% /home
> tmpfs                 5744132       0   5744132   0% /run/user/0
> meta                 60435328     128  60435200   1% /meta
> meta/meta            59968128    4992  59961088   1% /mnt/meta
> [root@fb-lts-mds0 ~]# ls /proc/fs/lustre/mdt/test-MDT0000/
> async_commit_count     hash_stats               identity_upcall       num_exports         sync_count
> commit_on_sharing      hsm                      instance              recovery_status     sync_lock_cancel
> enable_remote_dir      hsm_control              ir_factor             recovery_time_hard  uuid
> enable_remote_dir_gid  identity_acquire_expire  job_cleanup_interval  recovery_time_soft
> evict_client           identity_expire          job_stats             rename_stats
> evict_tgt_nids         identity_flush           md_stats              root_squash
> exports                identity_info            nosquash_nids         site_stats
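
For what it's worth, I've been assuming lctl get_param is the supported way to read these
parameters rather than poking /proc directly, e.g.:

    lctl get_param mdt.test-MDT0000.recovery_status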

Also, there's no /proc/fs/lustre/health_check:

> [root@fb-lts-mds0 ~]# ls /proc/fs/lustre/
> fld   llite  lod  lwp  mdd  mdt  mgs      osc      osp  seq
> ldlm  lmv    lov  mdc  mds  mgc  nodemap  osd-zfs  qmt  sptlrpc
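
My guess is that health_check may simply have moved out of /proc as part of the
procfs-to-sysfs migration, so I was going to check along these lines (not verified):

    # health_check may now live under sysfs rather than /proc
    ls /sys/fs/lustre/
    lctl get_param health_check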





