[lustre-discuss] [EXTERNAL] Re: Tuning for metadata performance

Carlson, Timothy S Timothy.Carlson at pnnl.gov
Tue Jan 5 09:06:06 PST 2021

The ZFS metadata performance always lagged behind ldiskfs.  While dated, a pretty graph of performance is here:


Don’t get me wrong, I love ZFS for OSS systems. You would have to pull the compression capabilities from my cold dead hands.
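For context, the compression praised above is typically enabled per-dataset with one property. A minimal sketch — the pool/dataset names here are hypothetical, not from this thread:

```shell
# Enable lz4 compression on a (hypothetical) OST dataset
zfs set compression=lz4 ostpool/ost0

# Check how much compression is actually achieving
zfs get compressratio ostpool/ost0
```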


From: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" <darby.vicker-1 at nasa.gov>
Date: Tuesday, January 5, 2021 at 8:51 AM
To: Timothy Carlson <Timothy.Carlson at pnnl.gov>, "Lustre-discuss at lists.lustre.org" <lustre-discuss at lists.lustre.org>
Subject: Re: [EXTERNAL] Re: [lustre-discuss] Tuning for metadata performance

Yes, we've done that already.  Sorry, I should have posted all our module parameters.

[root at hpfs-fsl-mds0 lustre]# cat /etc/modprobe.d/lustre.conf
# Lustre modprobe configuration

options lnet networks=tcp0(enp4s0),o2ib0(ib1),o2ib1(ib0)
options ko2iblnd map_on_demand=32
options osd_zfs osd_txg_sync_delay_us=0 osd_object_sync_delay_us=0
options zfs zfs_txg_history=120 zfs_txg_timeout=30 zfs_prefetch_disable=1 zfs_vdev_scheduler=deadline

[root at hpfs-fsl-mds0 lustre]#
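Note that modprobe options only take effect when the module loads. To confirm what the running zfs module actually picked up, the values can be read back from sysfs — a quick sketch using the standard parameter paths:

```shell
# Show the live values of the ZFS tunables set in lustre.conf
for p in zfs_txg_timeout zfs_prefetch_disable zfs_txg_history; do
    echo -n "$p = "
    cat /sys/module/zfs/parameters/$p
done
```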

Would you mind elaborating on why you think ZFS was a bad choice?  In general, we love the features it brings.

From: "Carlson, Timothy S" <Timothy.Carlson at pnnl.gov>
Date: Tuesday, January 5, 2021 at 9:21 AM
To: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" <darby.vicker-1 at nasa.gov>, "lustre-discuss at lists.lustre.org" <lustre-discuss at lists.lustre.org>
Subject: [EXTERNAL] Re: [lustre-discuss] Tuning for metadata performance

On the MDS, make sure you have turned off pre-fetch

echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable

On our ancient (similar vintage) file system this greatly reduced the load on the MDS so it didn’t come to a complete standstill when under pressure.  ZFS for the MDS was a horrible choice (I made the same mistake).
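The echo shown above only lasts until the module is reloaded or the node reboots. A sketch of making it persistent via modprobe — the same mechanism as the lustre.conf quoted elsewhere in this thread (the zfs.conf filename is a common convention, not mandated):

```shell
# Persist the setting across reboots/module reloads
echo "options zfs zfs_prefetch_disable=1" >> /etc/modprobe.d/zfs.conf

# Apply it immediately on the running system
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable
```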


From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" <darby.vicker-1 at nasa.gov>
Date: Tuesday, January 5, 2021 at 7:48 AM
To: "Lustre-discuss at lists.lustre.org" <lustre-discuss at lists.lustre.org>
Subject: [lustre-discuss] Tuning for metadata performance


I'm looking for some advice on tuning our existing Lustre file system to achieve better metadata performance.  This file system is getting fairly old – it's been in production for almost 4 years now.  The hardware and our existing tuning efforts can be found here.


The hardware is the same but we have upgraded the software stack a few times – now on CentOS 7.6, ZFS 0.7.9 and lustre 2.10.8.  We do plan to upgrade to the latest CentOS 7.x and either lustre 2.12 or 2.13 soon.  The MDS hardware isn't well-described in that thread so here are more details:

Chassis: Supermicro 2U Twin Server
Processor: 4 x Quad-Core Xeon Processor E5-2637 v2 3.50GHz (2 sockets/8 cores per node)
Memory: 16 x 16GB PC3-14900 1866MHz DDR3 ECC Registered DIMM (128GB per node)

External JBOD:
Chassis: 24x Hot-Swap 2.5" SAS - 12Gb/s SAS Dual Expander
Drives: 12 x 600GB SAS 3.0 12.0Gb/s 15000RPM - 2.5" - Seagate Enterprise Performance 15K HDD (512n)
Controller Card: LSI SAS 9300-8e SAS 12Gb/s PCIe 3.0 8-Port Host Bus Adapter

The above hardware and tuning served us well for a long time, but the lab has grown, both in the number of Lustre clients (now up to ~200 ethernet clients and ~500 IB clients) and the number of users.  With the extra users have come different types of workloads.  Previously, the file system was mostly used for workloads with a fairly small number of large files.  We now see workloads that include hundreds of concurrent processes all doing mixed small- and large-file IO on a lot of files (e.g. each process clones a repo, compiles a code, and runs a serial sim that writes a lot of data).

I recently ran the io500 tests and our LFS stats for MDEasy and MDHard are pretty bad, even compared to the lowest MD stats on the current io500 list.  Our standard NFS server handily beats our LFS with respect to MD performance, so I'm hopeful that we can squeeze more MD performance out of our LFS.  Obviously, software tuning on the existing hardware would be preferred, but we are open to hardware additions/upgrades if that would help (e.g. adding more MDSes).  There are a lot of tuning options in both ZFS and Lustre, so I'm hoping someone can point me in the right direction.

Are DNE and/or DoM expected to help?  I attended the SC20 Lustre BoF and it sounds like 2.13 has some metadata performance improvements, so an upgrade alone might help.  We have dual MDSes now, but for HA rather than performance.  I'd hate to lose the HA aspect since we use failover quite a bit (maintenance, etc.), but it would probably be worth it if MD performance improved significantly.  If I understand correctly, DNE has some overhead and performance suffers with just two MDSes, with a benefit only at four or more, correct?  If so, that wouldn't be a good option for us unless we add MDSes.  Would an upgrade to SSD or NVMe in our MDTs help?
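For reference, if the DNE/DoM route is taken (both need a newer Lustre than 2.10 — DoM landed in 2.11), the client-side setup commands look roughly like the following. The mount point, directory names, MDT indices, and the 1 MiB DoM threshold are all illustrative, not recommendations:

```shell
# DNE: create a directory striped across 2 MDTs (DNE phase 2 striped dirs)
lfs mkdir -c 2 /mnt/lustre/striped_dir

# DNE: pin a new directory to a specific MDT by index
lfs mkdir -i 1 /mnt/lustre/on_mdt1

# DoM: store the first 1 MiB of each file on the MDT, the rest on OSTs
lfs setstripe -E 1M -L mdt -E -1 /mnt/lustre/dom_dir
```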

I would greatly appreciate thoughts on the best path forward for making improvements.
