[Lustre-discuss] small/inexpensive cluster design

Ms. Megan Larko dobsonunit at gmail.com
Thu Apr 21 13:51:21 PDT 2011


Greetings,

I was part of a team that did this twice: once at the NASA
Goddard Space Flight Center Hydrological Sciences Branch, and again
at the Center for Research on Environment & Water.  Both were
successful experiences, I thought.

We used commercial off-the-shelf PC hardware and managed switches to
build a Beowulf-style cluster consisting of compute nodes plus OSS and
MDS nodes.  The OSS and the MGS/MDS units were separate machines, per
the recommendation of the Lustre team.  The back-end storage OST units
were 4U boxes containing SATA disks, connected to the OSS via CX4 (I
think) cables.  We used PERC 6/i RAID controllers and the
corresponding MegaCLI64 software tool on the OSS units to manage the
disks within.
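For anyone unfamiliar with how such a layout gets initialized, a rough
sketch of the Lustre target formatting follows.  All hostnames, device
paths, and the filesystem name are made up for illustration; exact
flags varied across the Lustre 1.x releases of that era:

```shell
# On the combined MGS/MDS node: one management target, one metadata target
# (device paths /dev/sdb, /dev/sdc and nid mds01@tcp are hypothetical)
mkfs.lustre --mgs /dev/sdb
mkfs.lustre --mdt --fsname=testfs --mgsnode=mds01@tcp /dev/sdc

# On each OSS node: format the RAID volume exposed by the PERC controller
# as an OST belonging to the same filesystem
mkfs.lustre --ost --fsname=testfs --mgsnode=mds01@tcp /dev/sdb

# Mounting a formatted target brings it online as a Lustre server
mount -t lustre /dev/sdb /mnt/mgs
```

This is only an outline of the general procedure, not the exact
commands we ran at the time.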

The OS was the Red Hat-based CentOS 4, upgraded to CentOS 5.5 before I
left.  The OST disks were formatted with the Lustre file system.

We were able to successfully export the Lustre mount-points via NFS
from the main client box.
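The NFS re-export is conceptually simple: mount Lustre on a gateway
client, then export that mount point with the kernel NFS server.  A
hedged sketch, with the filesystem name, NID, network range, and paths
all invented for illustration:

```shell
# Mount the Lustre filesystem on the gateway/client node
mount -t lustre mds01@tcp:/testfs /mnt/lustre

# Add an /etc/exports entry re-exporting it to the local network
echo '/mnt/lustre 192.168.1.0/24(ro,sync,no_subtree_check)' >> /etc/exports

# Re-read the exports table and start the NFS server (RHEL/CentOS style)
exportfs -ra
service nfs start
```

NFS clients then mount the gateway as usual, without needing any
Lustre client software themselves.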

We used the data on the Lustre file system to produce and display
Earth science images on an ordinary web interface (using a combination
of the proprietary IDL imaging software and the freely available GrADS
imaging software from IGES).  We chose the Lustre file system for the
project because of its price point (Free/Open-Source -- FOSS) and the
fact that it performed better for our purposes than GFS and our tests
of the, back then early, GlusterFS.

Just a data point for you.

megan
