[lustre-discuss] Trying to rebuild all Lustre rpms with zfs 0.6.4.2 -- and failing
ball at umich.edu
Wed Jul 22 10:11:47 PDT 2015
I'm looking for someone who can give me advice on a problem I am having
rebuilding the full set (server and client) of Lustre rpms, including
zfs support on the server side. The 2.7.0 rpms as distributed were
built with zfs 0.6.3, and do not work with 0.6.4.
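To see the mismatch for yourself, the requires of the prebuilt osd
package spell it out -- something like this (the package file name is
from memory, adjust to the rpm you actually downloaded):

  rpm -qp --requires lustre-osd-zfs-2.7.0-*.rpm | grep -i zfs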
So, I followed the directions here, fetching the current git source,
which is at something like 2.7.56:
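For reference, the fetch-and-prepare part is roughly this (the git URL
is the one the directions gave at the time; substitute whatever is
current):

  git clone git://git.hpdd.intel.com/fs/lustre-release.git
  cd lustre-release
  sh ./autogen.sh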
I modified this as follows to include zfs:
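What that came down to was adding the spl/zfs options to configure,
roughly like so (the /usr/src paths are just where the 0.6.4.2 source
trees sit on my build box; substitute your own):

  ./configure --with-linux=/path/to/patched/kernel/source \
      --with-spl=/usr/src/spl-0.6.4.2 \
      --with-zfs=/usr/src/zfs-0.6.4.2
  make rpms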
Then, when I reach the stage of building the Lustre rpms, I prepare by
installing the kernel rpm, creating the initramfs, and rebooting.
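Concretely, that is something like this (the rpm file name is my
reconstruction from the kernel version in the grub entry below, so
treat it as approximate):

  rpm -ivh kernel-2.6.32-504.16.2.el6_lustre.x86_64.rpm
  # sanity checks: the image exists and grub picked up the new stanza
  ls -l /boot/initramfs-2.6.32-504.16.2.el6_lustre*.img
  grep -B1 -A4 el6_lustre /boot/grub/grub.conf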
This is where it all goes horribly wrong. Our systems are built with
LVM partitions, with the exception of the /boot partition on /dev/sda1.
grub.conf is read from sda1, the kernel entry is chosen, and the boot
starts. Then nothing: the console goes silent. It appears that the
kernel cannot access the LVM partitions. From everything I can see,
dracut knows about them, but I am at a loss on how to proceed.
The kernel goes into grub via:

/sbin/new-kernel-pkg --package kernel --mkinitrd --dracut --depmod \
    --install 2.6.32-504.16.2.el6_lustre

and the resulting grub.conf entry is:
title Scientific Linux (2.6.32-504.16.2.el6_lustre)
        kernel /vmlinuz-2.6.32-504.16.2.el6_lustre ro
            root=/dev/mapper/vg0-lv_root rd_NO_LUKS rd_LVM_LV=vg0/lv_root
            LANG=en_US.UTF-8 rd_NO_MD rhgb quiet selinux=0 rd_LVM_LV=vg0/lv_swap
            SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us
            rd_NO_DM rhgb quiet printk.time=1 console=tty0 console=ttyS1,57600n8
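As far as I can tell the LVM bits really are in the image; this is how
I checked, along with the el6 dracut debug switches I know of to try
next (spellings as I understand them for this dracut version):

  # confirm device-mapper/lvm made it into the initramfs
  lsinitrd /boot/initramfs-2.6.32-504.16.2.el6_lustre*.img | grep -iE 'lvm|dm-mod'

  # for a more talkative boot, and an emergency shell if root is not
  # found, append to the kernel line:
  #   rdshell rdinitdebug rdudevdebug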
Can anyone advise me on how to proceed with this? I'd really like to
get the upgraded zfs in use in place of the 0.6.3 that is running now.