[Lustre-discuss] how the lustre distribute data among disks within one OST
Christopher J. Morrone
morrone2 at llnl.gov
Thu Jun 13 14:54:43 PDT 2013
I think you may be confused about what a stripe is in Lustre. If there
are only 2 OSTs, then you can only stripe a file across 2.
Or maybe I don't understand your terminology. I don't know what you
mean by "0,4" and "0,2".
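The round-robin layout Jaln describes can be sketched as a toy model (illustration only, not Lustre code; the 1 MiB stripe size is an assumed default): stripe i of a file lands on OST (start + i) mod num_osts, and a byte offset maps to stripe index offset // stripe_size.

```python
# Toy model of round-robin striping across OSTs (not Lustre source code).
STRIPE_SIZE = 1 << 20  # assumed stripe size of 1 MiB for illustration

def ost_for_stripe(stripe_index, num_osts, start_ost=0):
    """Round-robin: consecutive stripes cycle over the available OSTs."""
    return (start_ost + stripe_index) % num_osts

def ost_for_offset(byte_offset, num_osts, stripe_size=STRIPE_SIZE, start_ost=0):
    """Map a file byte offset to the OST holding the stripe that contains it."""
    return ost_for_stripe(byte_offset // stripe_size, num_osts, start_ost)

# With 6 stripes over 2 OSTs: stripes 0,2,4 land on OST0, stripes 1,3,5 on OST1.
layout = [ost_for_stripe(i, num_osts=2) for i in range(6)]
print(layout)  # [0, 1, 0, 1, 0, 1]
```

In this model, stripes 0, 2, and 4 are all separate objects on the same OST; whether any two of them are physically "closer" on disk is up to the backend filesystem, since Lustre itself does not manage individual disk layout.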
On 06/13/2013 02:38 PM, Jaln wrote:
> if I have 6 stripes, 2 OST, using round-robin striping,
> stripe 0,2,4 will be on OST0,
> stripe 1,3,5 will be on OST1,
> Do you guys have any idea about what will be the difference of accessing
> stripe 0,4 vs stripe 0,2?
> stripe 0, 2 seems to be closer than 0,4, or the lustre will do
> some intelligent work?
>
> Jaln
>
>
> On Thu, Jun 13, 2013 at 10:22 AM, Christopher J. Morrone
> <morrone2 at llnl.gov> wrote:
>
> On 06/13/2013 05:19 AM, E.S. Rosenberg wrote:
> > On Thu, Jun 13, 2013 at 3:09 AM, Christopher J. Morrone
> >> <morrone2 at llnl.gov> wrote:
> >> Lustre does not manage the individual disks. It sits on top of a
> >> filesystem, either ldiskfs (basically ext4) or ZFS (as of Lustre 2.4).
> > Is ZFS the recommended fs, or just an option?
> > Doesn't ZFS suffer major performance drawbacks on Linux due to it
> > living in userspace?
> > Thanks,
> > Eli
>
> LLNL (Brian Behlendorf) ported ZFS natively to Linux. We are not using
> the FUSE (userspace) version. You can find it at:
>
> http://zfsonlinux.org
>
> ZFS is one of the two backend filesystem options for Lustre, as of
> Lustre 2.4, which is the first Lustre release that fully supports
> ZFS. Here at LLNL we are using it on our newest and largest
> filesystem (55 PB).
>
> Chris
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>
>
>
>
> --
>
> Genius only means hard-working all one's life
>