[lustre-discuss] Expanding a zfsonlinux OST pool
Bob Ball
ball at umich.edu
Tue Nov 24 05:25:38 PST 2015
Thanks much for your reply, Olaf.
Answers are inline below.
On 11/23/2015 4:35 PM, Faaland, Olaf P. wrote:
> Hello Bob,
>
> We did something similar - our MDSs used zpools built on spinning
> disks in JBODs, and we switched to SSDs without bringing the filesystem
> down, using ZFS to replicate the data. It worked great for us.
>
> How are your pools organized (i.e., what does zpool status show)? There
> might be options that are more or less risky, or take more or less
> time, depending on how ZFS is using the disks.
This is a typical pool. It consists of disks on Dell MD1000 shelves,
each exported as a single-disk RAID-0 volume to make it appear as a
JBOD, and then assembled by PCI address into zpools of 10 disks each.
# zpool status ost-001
  pool: ost-001
 state: ONLINE
  scan: none requested
config:

        NAME                               STATE     READ WRITE CKSUM
        ost-001                            ONLINE       0     0     0
          raidz2-0                         ONLINE       0     0     0
            pci-0000:08:00.0-scsi-0:2:0:0  ONLINE       0     0     0
            pci-0000:08:00.0-scsi-0:2:1:0  ONLINE       0     0     0
            pci-0000:08:00.0-scsi-0:2:2:0  ONLINE       0     0     0
            pci-0000:08:00.0-scsi-0:2:3:0  ONLINE       0     0     0
            pci-0000:08:00.0-scsi-0:2:4:0  ONLINE       0     0     0
            pci-0000:08:00.0-scsi-0:2:15:0 ONLINE       0     0     0
            pci-0000:08:00.0-scsi-0:2:16:0 ONLINE       0     0     0
            pci-0000:08:00.0-scsi-0:2:17:0 ONLINE       0     0     0
            pci-0000:08:00.0-scsi-0:2:18:0 ONLINE       0     0     0
            pci-0000:08:00.0-scsi-0:2:19:0 ONLINE       0     0     0

errors: No known data errors
Filesystem       Size  Used Avail Use% Mounted on
ost-001/ost0024  5.4T  3.4T  2.0T  64% /mnt/ost-001
>
> Also, how often are disks failing and how long does a replacement take
> to resilver, with your current disks?
These are old systems, and the underlying WD 750GB disks are failing at
an average rate of 1 or 2 per week, out of some 270 disks running this
way. Some shelves have newer, larger disks, and those are giving us no
issues. We have a number of the bigger, newer disks as spares and
wanted to swap them in, which would leave more of the 750GB units
available as spares.
Resilvering typically takes 6 hours or so these days.
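For what it's worth, we just poll the scan line of zpool status between
swaps; once a resilver finishes, that line reports something like
"resilvered <size> in <time> with 0 errors":

# zpool status ost-001 | grep scan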
bob
>
> -Olaf
>
> ------------------------------------------------------------------------
> *From:* Bob Ball [ball at umich.edu]
> *Sent:* Monday, November 23, 2015 12:22 PM
> *To:* Faaland, Olaf P.; Morrone, Chris
> *Cc:* Bob Ball
> *Subject:* Expanding a zfsonlinux OST pool
>
> Hi,
>
> We have some zfsonlinux pools in use with Lustre 2.7 that sit on some
> older disks, and we are rapidly running out of spares for those. What
> we would _like_ to do, if possible, is replace all of the 750GB disks
> in an OST with 1TB disks, one at a time with a resilver between each
> swap, and then expand the OST once the last one completes, to take
> advantage of the larger space and the more reliable disks.
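>
> Per disk, we assume that would look something like the following (the
> old device name comes from zpool status; the new device path here is
> just a placeholder):
>
> # zpool replace ost-001 pci-0000:08:00.0-scsi-0:2:0:0 <new-1TB-disk>
> # zpool status ost-001   (wait for the resilver to complete before
>                           swapping the next disk)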
>
> Is this going to work? One of us here found the following:
>
> According to the Oracle docs, a pool can autoexpand if you set it to
> do so. I think the default must be off, because the one I checked is
> off (but that does indicate that the feature is supported in the Linux
> release as well).
>
> http://docs.oracle.com/cd/E19253-01/819-5461/githb/index.html
>
> [root at umdist02 ~]# zpool get autoexpand ost-006
> NAME     PROPERTY    VALUE  SOURCE
> ost-006  autoexpand  off    default
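>
> If that is right, presumably we would either turn it on ahead of time:
>
> [root at umdist02 ~]# zpool set autoexpand=on ost-006
>
> or leave it off and tell ZFS to grow each device by hand after the
> last resilver completes, something like:
>
> [root at umdist02 ~]# zpool online -e ost-006 <device>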
>
> We are using zfsonlinux version 0.6.4.2. Can we follow the procedures
> outlined in the Oracle doc with zfsonlinux?
>
> I guess my initial question assumed the expansion would not happen
> until the last disk is replaced and resilvered, but the document
> indicates this is not actually required?
>
> Thanks,
> bob
>
>