[Lustre-discuss] SAN, shared storage, iscsi using lustre?

Brian J. Murrell Brian.Murrell at Sun.COM
Wed Aug 13 05:13:04 PDT 2008


On Wed, 2008-08-13 at 11:55 +0300, Alex wrote:
> Hello Brian,

Hi.

> Thanks for your prompt reply... See my comments inline..

NP.

> i have a cluster with 3 servers (let say they are web servers 
> for simplicity). All web servers are serving the same content from a shared 
> storage volume mounted as document root on all.

Ahhh.  Your 3 servers would in fact then be Lustre clients.  Given that
you have identified 3 Lustre clients and 8 "disks" you now need some
servers to be your Lustre servers.

> What we have in addition to above:
> - other N=8 computers (or more). N will be what it needs to be and can be 
> increased as needed.

Well, given that those are simply disks, you only need to increase that
count insofar as your bandwidth and capacity needs demand.

As an aside, it seems rather wasteful to dedicate a whole computer to
being nothing more than an iscsi disk exporter, so it's entirely
possible that I'm misunderstanding this aspect of it.  In any case, if
you do indeed have 1 disk in each of these N=8 computers exporting a
disk with iscsi, then so be it and each machine represents a "disk".

> Nothing imposed. In my example, i said that all N 
> computers are exporting via iscsi their block devices (one block/computer) so 
> on ALL our web servers we have visible and available all 8 iscsi disks to build 
> a shared storage volume (like a SAN).

Right.  You need to unravel this.  If you want to use Lustre you need to
make those disks/that SAN available to Lustre servers, not your web
servers (which will be Lustre clients).

>  Doesn't mean that is a must that all of 
> them to export disks. Part of them can achieve other functions like MDS, or 
> perform other functions if needed.

Not if you want to have redundancy.  If you want to use RAID to get
redundancy out of those iscsi disks then the machines exporting those
disks need to be dedicated to simply exporting the disks and you need to
introduce additional machines to take those exported block devices, make
RAID volumes out of them and then incorporate those RAID volumes into a
Lustre filesystem.  You can see why I think it seems wasteful to be
exporting these disks, one per machine, as iSCSI targets.
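To make the shape of that concrete, here is a minimal sketch of what an OSS node would do with those exported disks. Everything specific in it is invented for illustration: the portal address, the IQN, and the device names are placeholders, and RAID-10 over four devices is just one possible layout.

```shell
# On an OSS node: discover and log in to the iSCSI targets that the
# disk machines export (portal address and IQN are placeholders).
iscsiadm -m discovery -t sendtargets -p 192.168.1.101
iscsiadm -m node -T iqn.2008-08.example:disk1 -p 192.168.1.101 --login
# ...repeat the login for each of the other exported disks...

# Assemble the imported block devices into a software RAID volume
# (here RAID-10 over four devices, purely as an example):
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# /dev/md0 then becomes the backing device for a Lustre OST.
```

The point is that the RAID assembly happens on the Lustre server, not on the web servers, which only ever mount the finished filesystem as clients.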

> Also, we can add more computers in above 
> schema, as you will suggest.

Well, you will need to add an MDS or two and 2 or more OSSes to achieve
redundancy.

> These 3 servers should be definitely our www servers. I don't know if can be 
> considered part of lustre...

Only Lustre clients then.

> Reading the Lustre FAQ, it is 
> still unclear to me who the Lustre clients are.

Your 3 web servers would be the Lustre clients.

> - how many more machines i need

Well, I would say 3 minimum as per my previous plan.

> - how to group and their roles (which will be MDS, which will be OSSes, which 
> will be clients, etc)

Again, see my previous plan.  You could simplify a bit and use 4
machines, two acting as active/passive MDSes and two as active/active
OSSes.
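A hedged sketch of how that four-machine layout is typically expressed at format time. The hostnames/NIDs (mds1, mds2, oss1, oss2), the filesystem name, and the backing devices are all invented here; the `--failnode` option is what records each node's failover partner.

```shell
# On mds1 (mds2 is the passive failover partner):
mkfs.lustre --fsname=webfs --mgs --mdt \
            --failnode=mds2@tcp0 /dev/md0

# On oss1 (oss2 is its failover partner, and vice versa on oss2):
mkfs.lustre --fsname=webfs --ost --mgsnode=mds1@tcp0 \
            --failnode=oss2@tcp0 /dev/md0

# On each of the 3 web servers (the Lustre clients):
mount -t lustre mds1@tcp0:/webfs /var/www
```

In an active/active OSS pair, each OSS serves its own OSTs during normal operation and takes over its partner's OSTs on failure; the MDS pair is active/passive because only one MDS can serve the MDT at a time.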

> - what i have to do to unify all iscsi disks in order to have non SPOF

RAID them on the MDSes and OSSes.

> - will be ok to group our 8 iscsi disks in two 4-disk software raid (raid6) 
> arrays (md0, md1),

No.  Please see my previous e-mail about what you could do with 8 disks.

> form on top another raid1 (let say md2), and on top of md2 
> to use lvm?

You certainly could layer LVM between the RAID devices and Lustre, but
it doesn't seem necessary.
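For completeness, if you did want LVM in the stack anyway (e.g. for online resizing), the layering would look roughly like this; the volume-group, logical-volume, and device names are invented:

```shell
# The RAID device becomes an LVM physical volume...
pvcreate /dev/md0
vgcreate lustre_vg /dev/md0
lvcreate -n ost0 -L 500G lustre_vg

# ...and Lustre is then formatted on the logical volume instead of
# directly on /dev/md0:
mkfs.lustre --fsname=webfs --ost --mgsnode=mds1@tcp0 /dev/lustre_vg/ost0
```

The extra layer buys flexibility at the cost of another moving part, which is why it is optional rather than necessary here.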

b.

