[lustre-discuss] separate SSD only filesystem including HDD

Patrick Farrell paf at cray.com
Tue Aug 28 05:54:33 PDT 2018

How are you measuring write speed?

From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Zeeshan Ali Shah <javaclinic at gmail.com>
Sent: Tuesday, August 28, 2018 1:30:03 AM
To: lustre-discuss at lists.lustre.org
Subject: [lustre-discuss] separate SSD only filesystem including HDD

Dear All, I recently deployed a 10PB+ Lustre solution which is working fine. Recently, for a genomics pipeline, we acquired additional racks with dedicated compute nodes and a single 24-NVMe SSD server per rack. Each SSD server is connected to the compute nodes via 100G Omni-Path.

Issue 1: when I combine the SSDs in stripe mode using ZFS, performance does not scale linearly. For example, a single SSD has a write speed of 1.3 GB/s, so adding five of them in stripe mode should give us close to 5 x 1.3 GB/s (or somewhat less), but we still get only 1.3 GB/s out of the five SSDs.
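One thing worth ruling out before blaming the pool layout is the benchmark itself: a single sequential stream (e.g. one dd process) often tops out near a single device's throughput no matter how wide the stripe is. A minimal sketch of a multi-stream test, assuming fio is installed and the striped pool is mounted at /mnt/nvmepool (hypothetical path; device names are examples):

```shell
# Striped pool across five NVMe devices (no raidz/mirror keyword = stripe):
zpool create -f nvmepool /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
    /dev/nvme3n1 /dev/nvme4n1

# Drive the stripe with several parallel writers. Note there is no
# --direct=1 here: ZFS on Linux of this vintage does not support O_DIRECT,
# so use a data size well beyond ARC/RAM and fsync at the end instead.
fio --name=stripe-test --directory=/mnt/nvmepool \
    --rw=write --bs=1M --size=20G --numjobs=8 \
    --ioengine=libaio --iodepth=16 --end_fsync=1 \
    --group_reporting
```

If aggregate throughput scales with --numjobs but a single job does not, the limit is the client stream, not the pool.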

Issue 2: once issue #1 is resolved, the second challenge is to expose the 24 NVMe drives to the compute nodes in a distributed, parallel fashion. NFS is not an option; we tried GlusterFS, but its DHT makes it slow.

I am thinking of adding another filesystem to our existing MDT and installing OSTs/OSSes on the NVMe servers, mounting this SSD filesystem where needed. So basically we would end up with two filesystems (the existing 10PB+ one and a second, SSD-based one).
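For what it's worth, Lustre does support serving multiple filesystems from a single MGS, with each filesystem getting its own MDT and a distinct --fsname. A hedged sketch of how the NVMe-backed filesystem might be formatted, assuming the existing MGS is reachable at mgs@o2ib (hypothetical NID) and the fsname ssdfs is unused:

```shell
# New MDT for the SSD filesystem, registering with the existing MGS
# (hypothetical device path and NID):
mkfs.lustre --fsname=ssdfs --mdt --index=0 \
    --mgsnode=mgs@o2ib /dev/mdt_device

# ZFS-backed OST on one of the NVMe servers:
mkfs.lustre --fsname=ssdfs --ost --index=0 \
    --mgsnode=mgs@o2ib --backfstype=zfs ostpool/ost0

# Clients mount whichever filesystem they need:
mount -t lustre mgs@o2ib:/ssdfs /mnt/ssdfs
```

An alternative worth weighing is keeping one filesystem and placing the NVMe OSTs in an OST pool, steering the genomics pipeline to them with lfs setstripe --pool; that avoids a second namespace.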

Does this sound correct?

Any other advice would be appreciated.


