[lustre-discuss] Building Lustre - kmod-lustre-osd-zfs compatibility

Ross, Travis Travis.Ross at kla.com
Mon Oct 25 21:50:07 PDT 2021


Follow up.

It seems my issue mostly revolves around liblnetconfig...

I'm watching the build process, and during my last run I verified that all of the libraries the install complains about do exist during the build:

[root at A2M13-MACHDFS002 .libs]# ls
liblnetconfig.a
liblnetconfig.la
liblnetconfig.lai
liblnetconfig.so
liblnetconfig.so.4
liblnetconfig.so.4.0.0
liblnetconfig_la-cyaml.o
liblnetconfig_la-liblnetconfig.o
liblnetconfig_la-liblnetconfig_lnd.o
liblnetconfig_la-liblnetconfig_netlink.o
liblnetconfig_la-liblnetconfig_udsp.o
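As a quick cross-check (the RPM filename below is just the one from my build, yours will differ), I've also been listing the package contents to confirm the library is even supposed to be installed from the RPM at all:

# Does the built lustre RPM actually package liblnetconfig?
rpm -qlp lustre-2.14.55_43_g6a08df2-1.el8.x86_64.rpm | grep liblnetconfig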

Something I'm also trying to identify is what all the configure options are and what they do.

I've noticed there's an --enable-shared and also an --enable-static, and I'm trying to understand the difference between the two.
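From what I can tell these are the standard libtool switches rather than anything Lustre-specific: --enable-shared controls whether the .so gets built, --enable-static controls the .a, and they can be turned on independently. My own sanity check for the defaults in this tree (just what I'm running, not from the Lustre docs):

# Show how this tree documents the two switches, with their defaults:
./configure --help | grep -E -A1 'enable-(shared|static)'

# Example: build only the shared library (plus the usual lustre flags):
./configure --enable-shared --disable-static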
I'm also seeing the lines below in lustre.spec, which seem to dictate which versions of these files get packaged, except I've now tried building with both --enable-shared and --enable-static with no change in behaviour either way in my installation.

Also, I don't know enough about this... but why would it delete anything in here?

rm -f $RPM_BUILD_ROOT%{_libdir}/liblnetconfig.la
%if %{with static}
echo '%attr(-, root, root) %{_libdir}/liblnetconfig.a' >>lustre.files
%endif
%if %{with shared}
echo '%attr(-, root, root) %{_libdir}/liblnetconfig.so' >>lustre.files
echo '%attr(-, root, root) %{_libdir}/liblnetconfig.so.*' >>lustre.files
%endif
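If I'm reading it right, the rm of the .la file is just the usual cleanup of the libtool archive (distros generally don't ship .la files), and the %if blocks key off the standard %{with} macros, so shared vs. static is decided at rpmbuild time. My guess at how to flip them when rebuilding from the source RPM (the .src.rpm name is my assumption, based on my binary RPM):

# %{with shared} / %{with static} follow rpmbuild's --with/--without switches:
rpmbuild --rebuild --with shared --without static \
    lustre-2.14.55_43_g6a08df2-1.el8.src.rpm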
Thanks,
-Travis

From: Ross, Travis
Sent: Monday, October 25, 2021 10:35 PM
To: lustre-discuss at lists.lustre.org
Subject: Building Lustre - kmod-lustre-osd-zfs compatibility

I know!  Another "Building Lustre - Unsuccessfully" Thread!

I'm fairly new to building and installing my own home-grown Lustre, and so far I've been having some issues with my final deployment.

So far I have gone through and compiled the Lustre kernel patch, installed MOFED with --add-kernel-support -kmp, compiled ZFS against the Lustre-patched kernel, and then compiled and built Lustre on top of all of the above. Through all of these compilations I'm not seeing any glaringly obvious errors or failures that would explain why, when I attempt to install, I get the traditional ksym dependency errors.
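For what it's worth, the way I've been trying to pin the ksym errors down is to diff what the failing kmod requires against what the installed packages provide. A rough sketch, assuming the ZFS kernel modules are packaged as kmod-zfs:

# Symbols (with CRCs) the Lustre ZFS osd kmod wants:
rpm -qp --requires kmod-lustre-osd-zfs-*.rpm | grep '^ksym(' | sort > required.txt

# Symbols the installed ZFS kmod package actually provides:
rpm -q --provides kmod-zfs | grep '^ksym(' | sort > provided.txt

# Anything printed here is a requirement nothing installed satisfies:
comm -23 required.txt provided.txt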

I want to reach out to see whether I'm being too aggressive with my product versions (latest master from Git) and whether I should step back and "decide" on a particular version of all of these packages. Does anyone have recommendations on a nice platform to "standardize" on, i.e. versions of ZFS and Lustre that play nicely together?

I'm not sure if attachments come through on this list; if they do, I've attached the yum output from installing my Lustre *.rpms, as well as the make output from my Lustre build.

Environment Details:

Patched Kernel:
4.18.0-240.22.1.el8_lustre.x86_64

MLNX:
mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u3.x86_64

ZFS:
zfs-2.1.99-485_gec64fdb93.el8.x86_64.rpm

LUSTRE:
lustre-2.14.55_43_g6a08df2-1.el8.x86_64.rpm


Also, before I sent this, I attempted to build the following environment, again unsuccessfully:


Patched Kernel:
4.18.0-240.22.1.el8_lustre.x86_64

MLNX:
mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u3.x86_64

ZFS:
zfs-2.0.6-1.el8.x86_64

LUSTRE:
lustre-2.14.55_43_g6a08df2-1.el8.x86_64.rpm


Also my configure syntax for reference:

MLNX:
./mlnxofedinstall --add-kernel-support --skip-repo -kmp

ZFS:
./configure --with-linux=/lib/modules/4.18.0-240.22.1.el8_lustre.x86_64/source --with-linux-obj=/lib/modules/4.18.0-240.22.1.el8_lustre.x86_64/source

LUSTRE:
./configure --enable-server  --disable-ldiskfs   --with-linux=/usr/src/kernels/4.18.0-240.22.1.el8_lustre.x86_64   --with-linux-obj=/usr/src/kernels/4.18.0-240.22.1.el8_lustre.x86_64   --with-o2ib=/usr/src/ofa_kernel/default   --with-zfs=/usr/src/zfs-2.0.6  --with-spl=/usr/src/spl-2.0.6
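And in case the flags themselves are fine and it's just version skew between the tree I hand to --with-zfs and the ZFS that's actually installed, these are the quick checks I've been running (assuming the kmod-flavoured ZFS packages, and that the zfs module is installed on the build host):

rpm -q zfs kmod-zfs        # what the package manager thinks is installed
ls -d /usr/src/zfs-*       # which trees --with-zfs could point at
modinfo -F version zfs     # the version stamped into the zfs kmod itself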

Any compatible combination you could recommend would be greatly appreciated. Should I just scale back entirely to 2.12 on RHEL 7.9? I was hoping to be closer to latest on a new cluster, but maybe it would be better to fall back to a more stable platform entirely. Or have I missed something important, and would I experience similar issues on an older platform anyway?

Thanks,
-Travis

