[Lustre-discuss] HW experience

Aaron S. Knister aaron at iges.org
Wed Mar 26 16:33:11 PDT 2008


After I got the kinks worked out, this setup FLIES, especially over InfiniBand. It's actually what I'm running for a 50 TB Lustre setup. I would strongly recommend either two 7-disk RAID5s with a hot spare, or one RAID6, per MD1000; I don't know what the performance difference between the two is like. I also believe ldiskfs will now let you format a single partition larger than 8 TB. If that's not the case, go with the two smaller RAID5s. If you use LVM to split a single physical partition into two logical volumes, your performance will suffer.
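For what it's worth, bringing one of those volumes up as an OST is only a couple of commands. This is just a sketch of the Lustre 1.6-style tooling -- the fsname, MGS NID, device name, and mount point below are placeholders, not what I actually run:

    # format the raw RAID volume directly (no partition table) as an OST
    mkfs.lustre --fsname=testfs --ost --mgsnode=mgs01@o2ib /dev/sdb
    # mount it to bring the OST online
    mkdir -p /mnt/ost0
    mount -t lustre /dev/sdb /mnt/ost0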

Also, don't bother partitioning the RAIDs. Use raw block devices (e.g. /dev/sdX). I've also seen significantly better performance out of RHEL5 than RHEL4, but that was with the PERC5, so I can't speak for the PERC6. The key (at least with the PERC5s) can be found in this article: http://thias.marmotte.net/archives/2008/01/05/Dell-PERC5E-and-MD1000-performance-tweaks.html. It makes a WORLD of difference. Good luck!
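I won't repeat the article's settings here (go read it), but that kind of tuning is all standard block-layer knobs, along these lines -- the values below are illustrative, not the article's:

    # illustrative values only -- see the article above for the real ones
    blockdev --setra 8192 /dev/sdb                    # raise readahead on the raw device
    echo deadline > /sys/block/sdb/queue/scheduler    # pick an I/O scheduler suited to streaming writes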

-Aaron 

Oh, and PS: I'd also avoid putting more than 3 MD1000s behind a single 1950, for bandwidth reasons.

----- Original Message ----- 
From: "Martin Gasthuber" <martin.gasthuber at desy.de> 
To: "Lustre" <lustre-discuss at clusterfs.com> 
Sent: Wednesday, March 26, 2008 7:53:31 AM GMT -05:00 US/Canada Eastern 
Subject: [Lustre-discuss] HW experience 

Hi, 

we would like to set up a small Lustre instance. For the OSTs we are 
planning to use standard Dell PE1950 servers (2x quad-core + 16 GB RAM), and 
for the disks a JBOD (MD1000) driven by the PE1950's internal RAID controller 
(RAID-6). Any experience (good or bad) with such a config? 

thanks, 
Martin 

_______________________________________________ 
Lustre-discuss mailing list 
Lustre-discuss at lists.lustre.org 
http://lists.lustre.org/mailman/listinfo/lustre-discuss 

