[lustre-discuss] replicated 3+ file system
Vlad Kopylov
vladkopy at gmail.com
Tue Nov 13 12:07:12 PST 2018
Thank you for the clarification, Andreas.
It looks like everyone is in it only for the network RAID functionality rather than
the distributed file system part.
Though of course RAID is distributed in some sense. Sadly, even systems that
have replica functionality, like Ceph and Gluster, lack the real distributed
part. Ceph's CRUSH maps ignore client placement, rendering them useless for
cross-cluster reads, even though Ceph has a good chained write replication engine.
Gluster's choose-local and latency-based replica selection are not ready - so, for
example, Facebook had to add background write replication functionality
(called halo) to make writes go to nearby replicas and let the nodes
distribute the data themselves.
I am sure such functions would benefit everyone's favourite enterprise
clients, unless they have everything on SSD and RDMA.
Would love to see such functionality in Lustre!
On Tue, Nov 13, 2018, 11:19 AM Andreas Dilger <adilger at whamcloud.com> wrote:
> On Nov 12, 2018, at 14:05, Vlad Kopylov <vladkopy at gmail.com> wrote:
> >
> > Hello,
> >
> > I am getting mixed Google results on whether Lustre supports full HA
> > for all of its daemons at this point.
> > Simply put, I have 3 servers in 3 buildings, with frontend apps working
> > with a replicated data set at each location.
> > Can I have Lustre replicate data across those 3 nodes and mount the FS at
> > each location? Preferably without excessive read traffic between locations,
> > as there is 1-2 ms of latency involved.
>
> While this is something that would be interesting to implement in Lustre,
> it isn't how Lustre is deployed today.
>
> In the 2.11 release, there is the ability to mirror files across OSTs, and
> you could install a separate MDT and OSTs at each site and have "local"
> subdirectories located on the local MDT, with an OST pool defined to create
> files on the local OST.
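>
> A rough sketch of what that setup could look like (the fsname "lfs1",
> the pool names, and the MDT/OST indexes below are made up for
> illustration; pool commands run on the MGS):
>
>   # define a pool containing only the OSTs at site A
>   lctl pool_new lfs1.siteA
>   lctl pool_add lfs1.siteA lfs1-OST[0000-0003]
>
>   # create a "local" subdirectory on site A's MDT (MDT index 1 here)
>   lfs mkdir -i 1 /mnt/lfs1/siteA
>
>   # new files under that directory are created on site A's OSTs
>   lfs setstripe --pool siteA /mnt/lfs1/siteA
>
>   # with 2.11 FLR, a file can also be mirrored across sites' pools
>   lfs mirror create -N --pool siteA -N --pool siteB /mnt/lfs1/shared/file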
>
> That said, it would probably be a lot easier to just have 3 separate
> Lustre filesystems and use a higher-level tool to do resync between the
> sites.
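>
> For example, a periodic rsync between the sites' mount points (the
> paths and host name here are hypothetical) could serve as that
> higher-level resync tool:
>
>   # push local changes from site A's filesystem to site B's copy
>   rsync -aHAX --delete /mnt/lustreA/shared/ siteB:/mnt/lustreB/shared/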
>
> Cheers, Andreas
> ---
> Andreas Dilger
> Principal Lustre Architect
> Whamcloud