[Lustre-discuss] Harddisk Allocation
Chan Ching Yu, Patrick
cychan at clustertech.com
Fri Jun 14 16:53:22 PDT 2013
Hi,
I am planning the hard disk allocation for our Lustre storage system.
There are two Lustre I/O servers in total: one acts as a combined MDS/OSS, the other as a pure OSS.
Both I/O servers connect to an MD3200, which is daisy-chained to 4 MD1200 enclosures.
Each MD enclosure is equipped with 12 x 600GB hard disks.
I use ASCII art to illustrate the storage system as follows:
(I also have a JPEG of this, but I don't know whether attachments are allowed on this mailing list; tell me if you can't see the text-formatted picture below.)
 MDS/OSS        OSS
    |            |
  ________________
 |                |  1  4  7  10
 |     MD3200     |  2  5  8  11
 |________________|  3  6  9  12
  ________________
 |                |  1  4  7  10
 |     MD1200     |  2  5  8  11
 |________________|  3  6  9  12
  ________________
 |                |  1  4  7  10
 |     MD1200     |  2  5  8  11
 |________________|  3  6  9  12
  ________________
 |                |  1  4  7  10
 |     MD1200     |  2  5  8  11
 |________________|  3  6  9  12
  ________________
 |                |  1  4  7  10
 |     MD1200     |  2  5  8  11
 |________________|  3  6  9  12
This is my plan for the hard disk allocation:
Hard disks 1 and 7 of the MD3200 form a RAID-1 disk group.
This disk group holds multiple virtual disks: one is the MGT, and the others are MDT1, MDT2, MDT3, etc.
Hard disks 2, 3, 4, 5 and 6 of each MD3200/MD1200 form a RAID-5 disk group.
Each such disk group holds only one virtual disk, which is used as an OST.
Hard disks 8, 9, 10, 11 and 12 of each MD3200/MD1200 form another RAID-5 disk group.
Again, each such disk group holds only one virtual disk, which is used as an OST.
Hard disks 1 and 7 of all the MD1200s are left unused (or configured as hot spares).
Each OST (a 5-disk RAID-5 group) therefore has 4 effective data disks; with a segment size of 256KB, the full stripe size is 1MB. A quick sanity check of the arithmetic follows below.
The local hard disks of each I/O server are very small, so I don't intend to use them for the MGT/MDTs.
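To double-check the stripe size and the usable capacity, here is a rough back-of-the-envelope calculation in Python (my own sanity-check sketch: it assumes the nominal 600GB per disk, excludes parity, and ignores RAID and ldiskfs overhead; the variable names are only for illustration):

# Rough sanity check of the proposed layout (ignores RAID and ldiskfs overhead).
DISK_GB      = 600        # nominal capacity of each hard disk
SEGMENT_KB   = 256        # RAID segment (strip) size per data disk
DATA_DISKS   = 4          # data disks in each 5-disk RAID-5 group (one disk's worth of parity)
RAID5_GROUPS = 2 * 5      # 2 RAID-5 groups per enclosure, 5 enclosures (MD3200 + 4 x MD1200)

# Full RAID-5 stripe = data disks x segment size; 4 x 256KB = 1024KB = 1MB,
# which lines up with Lustre's default 1MB I/O size.
stripe_kb = DATA_DISKS * SEGMENT_KB
print("full stripe: %d KB" % stripe_kb)

# Usable OST capacity, per RAID-5 group and in total.
per_ost_gb = DATA_DISKS * DISK_GB
total_tb = RAID5_GROUPS * per_ost_gb / 1000.0
print("per OST: ~%d GB, total: ~%.1f TB across %d OSTs" % (per_ost_gb, total_tb, RAID5_GROUPS))

# MGT/MDT RAID-1 pair on the MD3200, plus the idle slots on the MD1200s.
print("MGT/MDT RAID-1 usable: ~%d GB, unused/hot-spare disks: %d" % (DISK_GB, 4 * 2))

This works out to a 1MB full stripe, roughly 2.4TB per OST and about 24TB across the 10 OSTs, plus 600GB for the MGT/MDT group and 8 idle disks on the MD1200s.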
My plan above is based on the following considerations (not in priority order):
a) Easy to remember
b) Performance
c) Using the available hard disks as fully as possible
Do you have any better suggestions?
Thanks.
CY