[lustre-discuss] Building Lustre - kmod-lustre-osd-zfs compatibility

Ross, Travis Travis.Ross at kla.com
Mon Oct 25 19:34:40 PDT 2021


I know!  Another "Building Lustre - Unsuccessfully" Thread!

I'm fairly new to building and installing my own home-grown Lustre, and so far I've been having some issues with my final deployment.

So far I have compiled the Lustre-patched kernel, installed MOFED with --add-kernel-support and -kmp, compiled ZFS against the patched kernel, and then configured and built Lustre on top of all of the above. None of the compilations produce any glaringly obvious errors or failures that would explain why, when I attempt to install the resulting RPMs, I get the traditional ksym dependency errors.
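In case it helps, here is a rough sketch of how I've been trying to narrow down which symbols are unsatisfied (package names are examples matching the environment below; adjust to your actual build output):

```shell
# Sketch only: compare the ksym capabilities the Lustre ZFS kmod RPM
# requires against those the installed ZFS kmod packages provide.

# ksym requirements of the not-yet-installed package:
rpm -qp --requires kmod-lustre-osd-zfs-*.rpm | grep '^ksym' | sort > required.ksyms

# ksym capabilities provided by the installed ZFS kmods (add any other
# kmod packages in your environment to the query):
rpm -q --provides kmod-zfs | grep '^ksym' | sort > provided.ksyms

# Anything printed here is an unmet ksym dependency -- typically a
# symbol-checksum mismatch from modules built against different trees:
comm -23 required.ksyms provided.ksyms
```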

I wanted to reach out to see if maybe I'm being too aggressive with my component versions (latest master from Git) and whether I should step back and settle on particular versions of all these packages. Does anyone have recommendations for a nice platform to standardize on, i.e. versions of ZFS and Lustre that play nicely together?

I'm not sure if attachments are coming through; if so, I've attached the yum output from installing my Lustre *.rpm's, as well as the make output from my ZFS and Lustre builds.

Environment Details:

Patched Kernel:
4.18.0-240.22.1.el8_lustre.x86_64

MLNX:
mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u3.x86_64

ZFS:
zfs-2.1.99-485_gec64fdb93.el8.x86_64.rpm

LUSTRE:
lustre-2.14.55_43_g6a08df2-1.el8.x86_64.rpm


Before sending this, I also attempted to build the following environment, unsuccessfully:


Patched Kernel:
4.18.0-240.22.1.el8_lustre.x86_64

MLNX:
mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u3.x86_64

ZFS:
zfs-2.0.6-1.el8.x86_64

LUSTRE:
lustre-2.14.55_43_g6a08df2-1.el8.x86_64.rpm


My configure syntax, for reference:

MLNX:
./mlnxofedinstall --add-kernel-support --skip-repo -kmp

ZFS:
./configure --with-linux=/lib/modules/4.18.0-240.22.1.el8_lustre.x86_64/source --with-linux-obj=/lib/modules/4.18.0-240.22.1.el8_lustre.x86_64/source

LUSTRE:
./configure --enable-server  --disable-ldiskfs   --with-linux=/usr/src/kernels/4.18.0-240.22.1.el8_lustre.x86_64   --with-linux-obj=/usr/src/kernels/4.18.0-240.22.1.el8_lustre.x86_64   --with-o2ib=/usr/src/ofa_kernel/default   --with-zfs=/usr/src/zfs-2.0.6  --with-spl=/usr/src/spl-2.0.6
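One fallback I'm considering is pinning both trees to release tags instead of master, something like the following (tag names as they appear in the respective repositories; I'd still need to verify in the Lustre ChangeLog which ZFS version a given release was actually tested against):

```shell
# Sketch: build from matched release tags rather than master.
# Check lustre-release's ChangeLog for the supported ZFS version
# of whichever tag you pick before committing to it.
git -C zfs checkout zfs-2.0.6
git -C lustre-release checkout v2_14_0
```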

Any compatible combination you could recommend would be greatly appreciated. Should I just scale back entirely to 2.12 on RHEL 7.9? I was hoping to stay closer to the latest releases on a new cluster, but maybe it would be better to fall back to a more stable platform entirely. Or have I missed something important, in which case I'd hit similar issues on an older platform too?

Thanks,
-Travis
-------------- attachments --------------
Non-text attachments were scrubbed to URLs by the list archive:

make_output_zfs (877850 bytes):
<http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/attachments/20211026/2166dc95/attachment-0003.obj>

make_output_lustre (467169 bytes):
<http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/attachments/20211026/2166dc95/attachment-0004.obj>

yum_output (78895 bytes):
<http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/attachments/20211026/2166dc95/attachment-0005.obj>
