[Lustre-discuss] SAN, shared storage, iscsi using lustre?

Alex linux at vfemail.net
Wed Aug 13 01:55:46 PDT 2008


Hello Brian,

Thanks for your prompt reply... See my comments inline...

> Right.  So you have 8 iscsi disks.

Yes, let's simplify our test environment.
- I have 2 LVS routers (one active, one backup for failover) to balance 
connections across the servers located behind them.
- Behind LVS, I have a cluster of 3 servers (let's say they are web servers 
for simplicity). All web servers serve the same content from a shared 
storage volume mounted as the document root on each of them.

Up to here, the design is fixed.

As I said in one of my past emails, I used GFS on top of a shared storage 
logical volume. I gave up because I can't use RAID to group our iSCSI disks, 
so the design has a single point of failure (if one or more iSCSI 
disks/computers go down, the shared volume becomes unusable).

Goal: replace GFS and build non-SPOF shared storage using another cluster 
file system, let's say Lustre.

What we have in addition to the above:
- another N=8 computers (or more). N will be whatever it needs to be and can 
be increased as needed; nothing is imposed. In my example, I said that all N 
computers export their block devices via iSCSI (one block device per 
computer), so on ALL our web servers all 8 iSCSI disks are visible and 
available to build a shared storage volume (like a SAN). That doesn't mean 
all of them must export disks. Some of them can take on other roles, such as 
MDS, or perform other functions if needed. We can also add more computers to 
the above scheme, as you suggest.
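As a concrete illustration (not from the thread itself), attaching the exported disks on each web server with the open-iscsi initiator might look like the sketch below; the portal address and target IQN are made-up placeholders:

```shell
# Hypothetical sketch: attach one exported iSCSI disk on a web server.
# 192.168.1.10 and the IQN are placeholders for a real portal/target.

# Discover the targets offered by one exporting node
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to a discovered target; the disk then appears as a local /dev/sdX
iscsiadm -m node -T iqn.2008-08.net.example:disk0 -p 192.168.1.10 --login
```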

Using GFS, I can't take our iSCSI disks, group them with software RAID into 
md devices (md0, md1, etc.), unify those with LVM, and run GFS on top of the 
resulting logical volume. That's the problem with GFS.

> These 3 servers are still unclear to me.  What do you see their function
> as being?  Would they be the Lustre filesystem servers to which Lustre
> clients go to get access to the shared filesystem composed of the 8
> iscsi disks?

These 3 servers should definitely be our www servers. I don't know if they 
can be considered part of Lustre... They should be able to simultaneously 
access shared storage BUILT BY LUSTRE from our iSCSI disks. Reading the 
Lustre FAQ, it is still unclear to me who the Lustre clients are. My feeling 
tells me that our 3 web servers will be considered clients by Lustre. 
Correct? Maybe now, since you have more information, you can tell me the way 
to go:
- how many more machines I need
- how to group them and what their roles are (which will be the MDS, which 
will be OSSes, which will be clients, etc.)
- what I have to do to unify all the iSCSI disks so there is no SPOF
- which machine(s) will be responsible for aggregating our iSCSI disks?
- would it be OK to group our 8 iSCSI disks into two 4-disk software RAID6 
arrays (md0, md1), build a RAID1 on top of them (let's say md2), and use LVM 
on top of md2? What is the best way to group/aggregate our iSCSI disks 
(which RAID level)?
- how can our web servers access the resulting logical volume?
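The layering asked about in the RAID question could be sketched roughly as follows; the device names are placeholders for how the 8 iSCSI disks appear on the aggregating machine, and whether this stacking makes sense under Lustre is exactly the open question:

```shell
# Hypothetical sketch of the proposed aggregation; /dev/sd[b-i] stand in
# for the 8 imported iSCSI disks.

# Two 4-disk RAID6 arrays (each survives the loss of any 2 member disks)
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[f-i]

# Mirror the two arrays against each other
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1

# LVM on top of the mirror
pvcreate /dev/md2
vgcreate shared_vg /dev/md2
lvcreate -l 100%FREE -n shared_lv shared_vg
```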

> > This is not clear at all... Generally speaking, ext3 is a local file
> > system (used on one computer). Reading the FAQ, I didn't find an answer,
> > so I asked here...
>
> Right.  It is in fact too much information for the Lustre beginner.  You
> should just be told that Lustre operates on and manages the block
> device.  That it does so through ext3 only serves to confuse the Lustre
> beginner.  Later when you have a better grasp on the architecture it
> might be worthwhile understanding that each Lustre server does its
> management of the block device via ext3.  So please, don't worry about
> the traditional uses of ext3 and confuse its limitations with Lustre.
> Lustre simply didn't want to invent a new on-disk management library and
> used ext3 for it.
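To make that point concrete: in a minimal Lustre setup, the ext3-based backend is created by mkfs.lustre on each server's block device, and clients never see ext3 at all. A rough sketch, where the hostnames, devices, and mount points are placeholders:

```shell
# On the MDS node (also acting as MGS here): format and mount the MDT
mkfs.lustre --fsname=testfs --mgs --mdt /dev/sda1
mount -t lustre /dev/sda1 /mnt/mdt

# On each OSS node: format and mount one OST, pointing at the MGS
mkfs.lustre --fsname=testfs --ost --mgsnode=mds1@tcp0 /dev/sdb1
mount -t lustre /dev/sdb1 /mnt/ost0

# On each client (e.g. a web server): mount the assembled filesystem
mount -t lustre mds1@tcp0:/testfs /mnt/lustre
```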

OK, sounds good. I believe you.

Regards,
Alx

