[lustre-discuss] separate SSD only filesystem including HDD

Zeeshan Ali Shah javaclinic at gmail.com
Tue Aug 28 07:51:49 PDT 2018


1) fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite \
       --bs=4k --direct=0 --size=20G --numjobs=4 --runtime=240 --group_reporting

2) time cp x x2

3) and dd if=/dev/zero of=/ssd/d.data bs=10G count=4 iflag=fullblock
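
For comparison I could also run a direct-I/O variant (a sketch only; it bypasses
the page cache so the numbers reflect the SSDs rather than RAM, and the target
directory /ssd is an assumption):

# sequential write with O_DIRECT, larger blocks, one file per job under /ssd
fio --name=seqwrite --directory=/ssd --ioengine=libaio --iodepth=32 \
    --rw=write --bs=1M --direct=1 --size=20G --numjobs=4 --group_reporting

# dd with O_DIRECT and a modest block size, syncing at the end
dd if=/dev/zero of=/ssd/d.data bs=1M count=40960 oflag=direct conv=fsync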

Any other way to test this, please let me know.

/Zee



On Tue, Aug 28, 2018 at 3:54 PM Patrick Farrell <paf at cray.com> wrote:

> How are you measuring write speed?
>
>
> ------------------------------
> From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on
> behalf of Zeeshan Ali Shah <javaclinic at gmail.com>
> Sent: Tuesday, August 28, 2018 1:30:03 AM
> To: lustre-discuss at lists.lustre.org
> Subject: [lustre-discuss] separate SSD only filesystem including HDD
> *Subject:* [lustre-discuss] separate SSD only filesystem including HDD
>
> Dear All, I recently deployed a 10 PB+ Lustre solution which is working fine.
> Recently, for a genomic pipeline, we acquired additional racks with dedicated
> compute nodes and a single 24-NVMe SSD server per rack. Each SSD server is
> connected to the compute nodes via 100 Gb Omni-Path.
>
> Issue 1: when I combine the SSDs in stripe mode using ZFS, we do not scale
> linearly in terms of performance. For example, a single SSD's write speed is
> 1.3 GB/s, so adding 5 of them in stripe mode should give us close to 1.3 x 5,
> but we still get only 1.3 GB/s out of those 5 SSDs.
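>
> A minimal sketch of how I could isolate the pool itself from Lustre, assuming
> a striped pool named "ssdpool" built from the NVMe devices (all names are
> hypothetical):
>
>   # stripe across the NVMe devices; ashift=12 matches 4K physical sectors
>   zpool create -o ashift=12 ssdpool /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
>   # larger records generally suit streaming writes
>   zfs set recordsize=1M ssdpool
>   # watch per-device throughput while a benchmark runs against the pool
>   zpool iostat -v ssdpool 1
>
> If the raw pool scales but the filesystem on top of it does not, the
> bottleneck is more likely the network or the single-client path than the SSDs.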
>
> Issue 2: once issue #1 is resolved, the second challenge is to expose the 24
> NVMe drives to the compute nodes in a distributed and parallel way. NFS is
> not an option; we tried GlusterFS, but due to its DHT it is slow.
>
> I am thinking of adding another filesystem to our existing MDT and installing
> OSTs/OSSes on the NVMe servers, mounting this SSD filesystem where needed. So
> basically we would end up with two filesystems (one being the normal 10 PB+
> filesystem and the second one SSD-based).
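>
> A rough sketch of what the second filesystem's targets might look like,
> assuming it registers with the existing MGS and gets its own MDT (the fsname
> "ssdfs", the MGS NID 192.168.1.10@o2ib, and the device names are hypothetical):
>
>   # new MDT for the SSD filesystem, registering with the existing MGS
>   mkfs.lustre --fsname=ssdfs --mdt --index=0 --mgsnode=192.168.1.10@o2ib /dev/mdtdev
>   # ZFS-backed OST on the NVMe pool
>   mkfs.lustre --fsname=ssdfs --ost --index=0 --mgsnode=192.168.1.10@o2ib \
>       --backfstype=zfs ssdpool/ost0
>   # clients mount it separately from the 10 PB filesystem
>   mount -t lustre 192.168.1.10@o2ib:/ssdfs /mnt/ssdfs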
>
> Does this sound correct?
>
> Any other advice is welcome.
>
>
> /Zeeshan
>
>
>

