[lustre-discuss] Enable multipath for existing Lustre OST with ZFS backend

Tung-Han Hsieh thhsieh at twcp1.phys.ntu.edu.tw
Thu May 9 12:25:52 PDT 2019


Greetings,

Recently we got a new storage device. It has dual RAID controllers
with two fibre connections to the file server, which map the LUN of
the storage to the server:

# lsscsi -g
[5:0:0:0]    disk    IFT      DS 1000 Series   661J  /dev/sdb   /dev/sg4
[6:0:0:0]    disk    IFT      DS 1000 Series   661J  /dev/sdc   /dev/sg6

# /lib/udev/scsi_id -g -u /dev/sdb
3600d02310009ff8750249f7e31c5fd86

# /lib/udev/scsi_id -g -u /dev/sdc
3600d02310009ff8750249f7e31c5fd86

So /dev/sdb and /dev/sdc are actually the same LUN of the storage.

We have created a Lustre OST with a ZFS backend on /dev/sdb:

# mkfs.lustre --ost --fsname chome --mgsnode=<host> --index=0 \
              --backfstype=zfs chome_ost/ost /dev/sdb

It works fine. But soon after that, I was told that I should set up
multipath to take advantage of the dual fibre channels for load
balancing and HA. I am wondering whether it is too late for that,
because we already have data on the Lustre file system running on it.

I have read the multipath documentation. It seems that after multipath
is enabled, both /dev/sdb and /dev/sdc are mapped to a single device,
say /dev/mapper/mpath0, and the existing data is probably not affected.
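If I understand the documentation correctly, a minimal /etc/multipath.conf
for this LUN would look something like the following (the wwid is the value
reported by scsi_id above; the alias name is only my guess):

defaults {
        user_friendly_names yes
        find_multipaths     yes
}

multipaths {
        multipath {
                wwid  3600d02310009ff8750249f7e31c5fd86
                alias mpath0
        }
}

followed by restarting multipathd and checking the result with:

# multipath -ll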
If that is so, what we need to do is simply refer to the device as
/dev/mapper/mpath0 instead of /dev/sdb (please correct me if I am
wrong). So the problem seems to come down to ZFS: our OST pool
"chome_ost/ost" was created on /dev/sdb. Can we switch the pool's
device name to /dev/mapper/mpath0 ?
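
My rough plan, assuming the above is correct, would be to stop Lustre
on this OST, export the pool, enable multipath, and then re-import the
pool from /dev/mapper so that ZFS records the multipath device instead
of /dev/sdb:

(stop Lustre on this OST first; the mount point below is just an example)
# umount /mnt/chome_ost
# zpool export chome_ost

(configure and start multipath so that /dev/mapper/mpath0 appears, then
 re-import the pool, telling ZFS to look for devices under /dev/mapper)
# zpool import -d /dev/mapper chome_ost
# zpool status chome_ost

Is this the right procedure, or is there something else to watch out for?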

Thanks very much in advance for your suggestions :)

Best Regards,

T.H.Hsieh

