[lustre-discuss] Lustre optimize for sparse data files ?
thhsieh at twcp1.phys.ntu.edu.tw
Wed Sep 9 00:49:09 PDT 2020
Thank you very much for your prompt reply.
I have a follow-up question. Our Lustre file system pool is large: some
OSTs use an ldiskfs backend and others use a ZFS backend. Can I enable
compression only on a ZFS-backed OST that is about to join the Lustre
file system, without affecting the rest of the system?
If a ZFS-backed OST has already been running for a while without
compression, can I enable it later without harming the existing data?
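For reference, enabling compression on ZFS is a per-dataset property change. The sketch below assumes a hypothetical pool/dataset name ostpool/ost0 (adjust to your own layout) and is meant to be run on the OSS node hosting that OST; note that ZFS compresses only data written after the property is set, so existing blocks are left untouched:

```shell
# Hypothetical dataset name "ostpool/ost0"; substitute your own.
if command -v zfs >/dev/null 2>&1; then
  # Enable lz4 compression on the dataset backing one OST.
  # Only data written after this point is compressed.
  zfs set compression=lz4 ostpool/ost0
  # Inspect the property and the ratio achieved so far:
  zfs get compression,compressratio ostpool/ost0
else
  echo "zfs command not found; run this on the OSS that hosts the OST"
fi
```

Because the property is set on one dataset, OSTs backed by ldiskfs or by other ZFS datasets are unaffected.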
Thank you very much.
On Wed, Sep 09, 2020 at 08:32:30AM +0200, Robert Redl wrote:
> Dear Tung-Han,
> you can use Lustre with a ZFS backend and compression enabled. That has
> the effect you are looking for and works very well.
> Am 09.09.20 um 05:13 schrieb Tung-Han Hsieh:
> > Dear All,
> > I would like to ask whether the Lustre file system implements an
> > optimization for large sparse data files.
> > For example, for a 3GB data file in which more than 80% of the bytes
> > are zero, can the Lustre file system avoid allocating the full 3GB of
> > disk space?
> > I know that some file systems (e.g., ZFS) have this function. If
> > Lustre does not, is there a roadmap to implement it in the future?
> > Thanks for your reply in advance.
> > Best Regards,
> > T.H.Hsieh
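As background to the question above, a sparse file's "apparent" size can far exceed the blocks actually allocated. This quick demonstration is at the plain POSIX/local-filesystem level, not Lustre-specific; the temporary file lands on whatever filesystem `mktemp` uses:

```shell
# Create an (apparently) 3 GB file without writing any data blocks:
f=$(mktemp)
truncate -s 3G "$f"
# Compare apparent size vs. blocks actually allocated on disk:
ls -lh "$f"    # size column shows 3.0G
du -h "$f"     # allocated space is (near) zero
rm -f "$f"
```

The gap between `ls -l` and `du` output is exactly the savings the question is about: whether the backend stores the zero-filled holes or not.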
> > _______________________________________________
> > lustre-discuss mailing list
> > lustre-discuss at lists.lustre.org
> > http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org