[lustre-discuss] Is there a ceiling on how many Lustre filesystems a client can mount

肖正刚 guru.novice at gmail.com
Thu Jul 16 01:35:26 PDT 2020


Hi, Mark Hahn

Thank you very much for your detailed reply, and sorry for the ambiguous
description.
For various reasons, we decided not to expand the Lustre filesystem that
already exists; so what I want to know is how many Lustre filesystems a
client can mount at the same time.
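
For reference, mounting several Lustre filesystems side by side on one
client is just a matter of repeating the mount, one per filesystem; a
minimal sketch (the MGS NIDs, fsnames, and mount points below are made-up
examples):

  # each filesystem is identified by its MGS NID and fsname
  mount -t lustre mgs1@tcp0:/fsone /mnt/fsone
  mount -t lustre mgs2@tcp0:/fstwo /mnt/fstwo

  # or persistently via /etc/fstab:
  # mgs1@tcp0:/fsone  /mnt/fsone  lustre  defaults,_netdev  0 0
  # mgs2@tcp0:/fstwo  /mnt/fstwo  lustre  defaults,_netdev  0 0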

Best regards.

Mark Hahn <hahn at mcmaster.ca> wrote on Thursday, July 16, 2020 at 3:00 PM:

> > On Jul 15, 2020, at 12:29 AM, 肖正刚 <guru.novice at gmail.com> wrote:
> >> Is there a ceiling on the number of Lustre filesystems that can be
> >> mounted in a cluster?
>
> It is very high, as Andreas said.
>
> >> If so, what's the number?
>
> The following contains specific limits:
>
>
> https://build.whamcloud.com/job/lustre-manual//lastSuccessfulBuild/artifact/lustre_manual.xhtml#idm140436304680016
>
> You'll notice that the limits depend on some aspects of your
> configuration, such as the size and number of your OSTs.  I see OSTs in
> the range of 75-400 TB (and OST counts between 58 and 187).
>
> >> If not, how many is appropriate?
>
> Lustre is designed to scale, so a config with a small number of OSTs
> on very few OSSes doesn't make that much sense.  OSTs are pretty much
> expected to be decent-sized RAIDs.  There are tradeoffs between
> cost-efficient disk sizes (maybe 16 TB today), RAID overhead (usually
> N+2), and bandwidth (HBA and OSS network).
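>
> As a rough illustration of that sizing tradeoff, a sketch with assumed
> numbers (16 TB drives, 8+2 RAID-6 groups, 100 OSTs; not a recommendation):
>
>   drives_data=8; drive_tb=16                  # 8 data + 2 parity per OST
>   echo "usable per OST: $((drives_data * drive_tb)) TB"        # 128 TB
>   echo "100 such OSTs:  $((drives_data * drive_tb * 100)) TB"  # ~12.8 PB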
>
> >> Does mounting multiple filesystems affect the stability of each
> >> filesystem or cause other problems?
>
> My experience is that the main factor in reliability is device count,
> rather than how the devices are organized.  For instance, if you
> add more OSSes, you may get roughly linear performance gains, but
> you also increase the chance of having components crash or fail.
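>
> A back-of-the-envelope way to see that effect, with an assumed 1%
> annual failure rate per OSS and 20 OSSes:
>
>   # chance that at least one of n servers fails in a year, assuming
>   # independent failures with probability p each
>   awk 'BEGIN { p = 0.01; n = 20; printf "%.1f%%\n", (1 - (1 - p)^n) * 100 }'
>   # prints 18.2% -- versus 1% for a single server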
>
> The main reason for separate filesystems is usually that the MDS
> (or MDT) can be a bottleneck.  But you can scale MDSes, instead.
>
> regards, mark hahn.
>