<div dir="ltr">Hi, Mark Hahn<div><br><div>Very appreciate for your detailed reply.</div><div>And sorry for the ambiguous description.</div><div></div><div>For some reasons, we decided not to expand on the lustre filesystem already exists; so what I want to know is the number of lustre filesystems that a client can mount on the same time .
Best regards.

Mark Hahn <hahn@mcmaster.ca> wrote on Thu, Jul 16, 2020 at 3:00 PM:

> On Jul 15, 2020, at 12:29 AM, ??? <guru.novice@gmail.com> wrote:
>> Is there a ceiling on the number of Lustre filesystems that can be mounted in a cluster?

It is very high, as Andreas said.

>> If so, what's the number?

The following contains specific limits:

https://build.whamcloud.com/job/lustre-manual//lastSuccessfulBuild/artifact/lustre_manual.xhtml#idm140436304680016

You'll notice that you must assume some aspects of configuration, such as the size and number of your OSTs. I see OSTs in the range of 75-400TB (and OST counts between 58 and 187).

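As a rough illustration, the aggregate capacity implied by those figures is simple arithmetic. A back-of-the-envelope sketch (the numbers just reuse the observed ranges above, and the pairings of size with count are arbitrary, not real configurations):

# Aggregate filesystem capacity ~= OST size x OST count.
# Figures reuse the observed ranges above; the pairings are arbitrary.
for ost_tb, ost_count in [(75, 58), (400, 187)]:
    total_pb = ost_tb * ost_count / 1000.0  # TB -> PB (decimal units)
    print("%d OSTs x %d TB ~= %.1f PB" % (ost_count, ost_tb, total_pb))
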
>> If not, how many would be appropriate?

Lustre is designed to scale, so a config with a small number of OSTs on very few OSSes doesn't make that much sense. OSTs are pretty much expected to be decent-sized RAIDs. There are tradeoffs among cost-efficient disk sizes (maybe 16T today), RAID overhead (usually N+2), and bandwidth (HBA and OSS network).

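To make the parity overhead concrete, a small sketch (the 16T disk size is the example figure above; the RAID geometries are hypothetical):

# Capacity cost of N+2 parity for a few example RAID geometries,
# using the ~16T cost-efficient disk size mentioned above.
disk_tb = 16
for n in (8, 10, 16):              # data disks per N+2 set (hypothetical)
    raw = (n + 2) * disk_tb        # raw capacity, parity disks included
    usable = n * disk_tb           # capacity left after N+2 parity
    print("%d+2 x %dT: %dT usable of %dT raw (%.0f%% parity overhead)"
          % (n, disk_tb, usable, raw, 100.0 * (raw - usable) / raw))
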
>> Does mounting multiple filesystems affect the stability of each filesystem or cause other problems?

My experience is that the main factor in reliability is device count, rather than how the devices are organized. For instance, if you have more OSSes, you may get linearly nicer performance, but you also increase your chance of having components crash or fail.

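That intuition can be made precise with a toy model; the per-component failure probability below is purely illustrative:

# Toy model: with n independent components, each failing with probability
# p over some period, P(at least one failure) = 1 - (1 - p)^n.
# The value of p is purely illustrative.
p = 0.01
for n in (10, 50, 200):
    print("n=%3d: P(at least one failure) = %.1f%%"
          % (n, 100.0 * (1.0 - (1.0 - p) ** n)))
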
The main reason for separate filesystems is usually that the MDS (maybe MDT) can be a bottleneck. But you can scale MDSes instead.

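For example, a sketch assuming the DNE feature (multiple MDTs in one filesystem) and the standard lfs client tool; the mount point and MDT index are hypothetical:

# Sketch: with DNE, "lfs mkdir -i <index>" places a new directory on a
# specific MDT, spreading metadata load without needing a second
# filesystem. Path and index here are hypothetical.
import subprocess

def mkdir_on_mdt(path, mdt_index):
    subprocess.run(["lfs", "mkdir", "-i", str(mdt_index), path], check=True)

mkdir_on_mdt("/mnt/lustrefs/project2", 1)  # hypothetical path and index
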
regards, mark hahn.