[lustre-discuss] design to enable kernel updates

Ben Evans bevans at cray.com
Mon Feb 6 12:22:17 PST 2017


It's certainly possible.  When I've done that sort of thing, you upgrade
the OS image on all the servers first, then boot half of them (the A
side) into the new image; all of their targets fail over to the B
servers.  Once the A side is back up, reboot the B half into the new OS.
Finally, fail back to the "normal" running state.

At least when I've done it, you'll want to drive the failovers manually
so the HA infrastructure doesn't surprise you.
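The manual failover/failback Ben describes can be sketched roughly as
below. This is a hedged illustration, not a tested procedure: the
hostnames (oss-a, oss-b), device path, and mount point are hypothetical,
and it assumes each target was formatted with --servicenode entries
naming both servers of the HA pair (and that any Pacemaker/Corosync
resources are in maintenance mode so the cluster doesn't act on its own).

```shell
# On oss-a (still on the old kernel): cleanly stop the target so clients
# will reconnect to the failover partner listed in the target's NIDs.
umount /mnt/lustre/ost0

# On oss-b (the HA partner): take over the target. Clients recover
# their in-flight I/O once the target finishes mounting.
mount -t lustre /dev/mapper/ost0 /mnt/lustre/ost0

# Now reboot oss-a into the new kernel. When it is back up, reverse the
# two commands above to fail the target back to its "normal" home.
```

Repeated per target across the A side, then the B side, this gives the
rolling upgrade with no client-visible downtime beyond the brief
recovery window at each failover.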

-Ben

On 2/6/17, 2:54 PM, "lustre-discuss on behalf of Brian Andrus"
<lustre-discuss-bounces at lists.lustre.org on behalf of toomuchit at gmail.com>
wrote:

>All,
>
>I have been contemplating how lustre could be configured such that I
>could update the kernel on each server without downtime.
>
>It seems this is _almost_ possible when you have a SAN system, so you
>have failover for OSTs and MDTs. BUT the MGS/MGT seems to be the
>problematic one, since rebooting it seems to cause downtime that cannot
>be avoided.
>
>If you have a system where the disks are physically part of the OSS
>hardware, you are out of luck. The hypothetical scenario I am using is
>someone running a VM whose qcow image lives on a lustre mount (basically
>an active, open file being read and written continuously). How could
>lustre be built so that no one on the VM would notice a kernel upgrade
>on the underlying lustre servers?
>
>
>Could such a setup be done? It seems that would be a better use case for
>something like GPFS or Gluster, but being a die-hard lustre enthusiast,
>I want to at least show it could be done.
>
>
>Thanks in advance,
>
>Brian Andrus
>
>_______________________________________________
>lustre-discuss mailing list
>lustre-discuss at lists.lustre.org
>http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
