[lustre-discuss] how homogenous should a lustre cluster be?

White, Cliff cliff.white at intel.com
Mon Mar 20 10:58:44 PDT 2017

Comments inline.

From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org<mailto:lustre-discuss-bounces at lists.lustre.org>> on behalf of "E.S. Rosenberg" <esr+lustre at mail.hebrew.edu<mailto:esr+lustre at mail.hebrew.edu>>
Date: Monday, March 20, 2017 at 10:19 AM
To: "lustre-discuss at lists.lustre.org<mailto:lustre-discuss at lists.lustre.org>" <lustre-discuss at lists.lustre.org<mailto:lustre-discuss at lists.lustre.org>>
Subject: [lustre-discuss] how homogenous should a lustre cluster be?

Dear all,

>How homogenous/not homogenous are your clusters?

>At the moment all our OSS are identical and running lustrefs (lustre 2.8.0), now that I am taking one OSS offline for hardware maintenance I started wondering if I can bring it back as a ZFS OSS or would that make my lustre blow up?

Two things: 1) If you are re-making or replacing an existing OSS, you need to follow the documented procedure, or there will be problems. See the Lustre Operations Manual for the process.

2) We test with mixed clusters all the time. There are no issues at all with making one OSS ZFS and the other OSSs ldiskfs, and no issues with multiple hardware types in a cluster.
Depending on the mix of hardware, you may wish to implement striping policies to avoid performance and space-utilization issues. For example, if you stripe across multiple OSTs and one OST is older, slower hardware, the overall I/O speed will be limited by your slowest OST. OST pools are very useful for this.
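As a rough sketch of bringing the rebuilt OSS back with a ZFS backend: the pool name, filesystem name, OST index, MGS NID, and device names below are placeholders, not details from this thread, so substitute your own values (and again, follow the manual's replacement procedure so the OST index matches the one being replaced).

```shell
# Create a ZFS pool on the OSS's disks (device names are hypothetical)
zpool create -O canmount=off ostpool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Format an OST on that pool, reusing the index of the OST being replaced
mkfs.lustre --ost --backfstype=zfs \
    --fsname=lfs01 --index=3 \
    --mgsnode=192.168.1.10@tcp \
    ostpool/ost3

# Mount the OST to bring it back into the filesystem
mkdir -p /mnt/ost3
mount -t lustre ostpool/ost3 /mnt/ost3
```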
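A minimal sketch of using OST pools to separate slow and fast hardware; the filesystem name, pool names, OST indexes, and mount path are hypothetical:

```shell
# Group the older, slower OSTs into their own pool
lctl pool_new lfs01.slowpool
lctl pool_add lfs01.slowpool OST[0-3]

# Group the newer OSTs into a second pool
lctl pool_new lfs01.fastpool
lctl pool_add lfs01.fastpool OST[4-7]

# Direct new files under this directory to the fast pool only
lfs setstripe --pool fastpool /mnt/lfs01/scratch

# Verify pool membership
lctl pool_list lfs01.fastpool
```

Files striped within a single pool never mix OST generations, so a stripe is no longer throttled by the slowest OST in the filesystem.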

>Also in a more general sense do other people have lustre clusters with OSS/OSTs that are different hardware generations etc.?
There are many customers with multiple hardware generations in a cluster. Configurations tend to vary by site and use case. Typically, when people add a big pile of new OSSs, they put the new gear up as a separate filesystem or pool to simplify the performance/space situation.


Hope this helps

