[lustre-discuss] Interesting disk usage of tar of many small files on zfs-based Lustre 2.10

Nathan R.M. Crawford nrcrawfo at uci.edu
Fri Aug 4 15:13:04 PDT 2017


Hi Alex,

On Thu, Aug 3, 2017 at 6:53 PM, Alexander I Kulyavtsev <aik at fnal.gov> wrote:

> Lustre IO size is 1 MB; you have a 4 MB zfs recordsize.
> Do you see the IO rate change when the tar record size is set to 4 MB (tar -b 8192)?
>

  I'm actually using 4M IO, which gets picked up automatically by the OSTs
on 4M ZFS datasets. I tried -b 8192, but it was slightly slower than -b
2048, which is significantly faster (about 75% of the wall time) than the default
-b 20. I do have the directory's stripe_size=1M and stripe_count=1 because
that had the best overall performance for our typical workloads. I'm sure
there is still LOTS of room for improvement by tweaking parameters, which
I'll get to as soon as all the other fires are put out :)
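
For reference, that combination boils down to something like the following
(the directory and archive paths here are just placeholders):

  # directory default layout: 1 MiB stripe size, single stripe
  lfs setstripe -S 1M -c 1 /lustre/scratch/tinyfiles

  # archive with a 1 MiB tar blocking factor (2048 x 512-byte records)
  tar -b 2048 -cf tinyfiles.tar -C /lustre/scratch tinyfiles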


>
> How many data disks do you have in each raidz2?
>

   There are three OSTs on a single OSS. Each target is on a dataset in a
pool of 30 SAS disks arranged as 3 10-disk raidz2 vdevs. We are
simultaneously running a BeeGFS system on the same hardware, so each zpool
has a Lustre dataset and a BeeGFS dataset. I noticed the tar file disk
space discrepancy during benchmark comparisons of the two file systems.
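
(For the record, each OST pool is laid out roughly like the sketch below;
the device names are placeholders, and 4M records also require raising
zfs_max_recordsize.)

  zpool create -o ashift=12 ost0pool \
      raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
      raidz2 sdk sdl sdm sdn sdo sdp sdq sdr sds sdt \
      raidz2 sdu sdv sdw sdx sdy sdz sdaa sdab sdac sdad
  echo 4194304 > /sys/module/zfs/parameters/zfs_max_recordsize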


> zfs may write a few extra empty blocks to improve defragmentation; IIRC
> this patch is enabled by default in zfs 0.7 to improve IO rates for some
> disks: https://github.com/zfsonlinux/zfs/pull/5931
>
  It should have that patch (using the 0.7.0 rpms from the non-testing
kmod el7.3 repository), and I also have zfs_vdev_aggregation_limit=16MiB,
so that should not be the issue.
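
(That parameter takes a value in bytes; 16 MiB corresponds to:)

  # vdev IO aggregation limit, in bytes (16 MiB)
  echo 16777216 > /sys/module/zfs/parameters/zfs_vdev_aggregation_limit
  # or persistently, in /etc/modprobe.d/zfs.conf:
  #   options zfs zfs_vdev_aggregation_limit=16777216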


>  If I understand it correctly, for very small files (untarred) there will
> be overhead to pad each file to the record size, plus extra padding to P+1
> records (=P extra) and parity records (+P), plus the metadata for the
> lustre ost object. For raidz2 with P=2 that is a factor of 5x or more.
>
> I am absolutely UNsurprised that the unpacked files take a lot of space.
From the nominal size of the uncompressed tar, they average 8.6K/file.
Unarchived, but compressed at the zfs level with lz4, they average
11.4K/file. If an 8K file can be compressed to less than 4K, storing it
with two parity blocks should take a minimum of 12K. 11.4K seems a
reasonable average if there are any empty files in there.
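
Roughly, the per-file accounting with 4K sectors (ashift=12) and raidz2
(P=2) looks like:

  8.6K average file, lz4 compressed  ->  1 data sector    (4K)
  raidz2 parity                      ->  2 parity sectors (8K)
  minimum for a non-empty file       ->  3 x 4K = 12K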

We have the metadata target on SSDs with ashift=9. This works out to
0.4K/file on the MDT. We also have dnodesize=auto on the MDT, so I am
waiting for the zfs 0.7.1 rpms to include
https://github.com/zfsonlinux/zfs/pull/6439 before opening the file system
up to non-scratch usage.
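
(For anyone repeating the arithmetic, checks along these lines give the raw
numbers; the pool/dataset names are placeholders:)

  zfs get -H -o value used mdtpool/mdt0    # bytes used on the MDT dataset
  zfs get dnodesize mdtpool/mdt0           # confirm dnodesize=auto
  lfs df -i /mnt/lustre                    # inode (file) counts per target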

-Nate



> Alex.
>
> On Aug 3, 2017, at 7:28 PM, Nathan R.M. Crawford <nrcrawfo at uci.edu> wrote:
>
> Off-list, it was suggested that tar's default 10K blocking may be the
> cause. I increased it to 1MiB using "tar -b 2048 ...", which seems to
> result in the expected 9.3 GiB disk usage. It probably makes archives
> incompatible with very old versions of tar, but meh.
>
> -Nate
>
> On Thu, Aug 3, 2017 at 3:07 PM, Nathan R.M. Crawford <nrcrawfo at uci.edu>
> wrote:
>
>>   In testing how to cope with naive users generating millions of tiny
>> files, I noticed some surprising (to me) behavior on a lustre 2.10/ZFS
>> 0.7.0 system.
>>
>>   The test directory (based on actual user data) contains about 4 million
>> files (avg size 8.6K) in three subdirectories. Making tar files of each
>> subdirectory gives a total nominal size of 34GB, and according to "zfs
>> list", the tar files take up 33GB on disk.
>>
>>   The initially surprising part is that making copies of the tar files
>> only adds 9GB to the disk usage. I suspect that the tar files were created
>> as a bunch of tiny appends, and with raidz2 on ashift=12 disks (4MB max
>> recordsize), there is some overhead/wasted space on each mini-write. The
>> copies of the tar files, however, could be made as a single
>> write that avoided the overhead and probably allowed the lz4 compression to
>> be more efficient.
>>
>>   Are there any tricks or obscure tar options that make archiving
>> millions of tiny files on a Lustre system avoid this? It is not a critical
>> issue, as taking a minute to copy the tar files is simple enough.
>>
>> -Nate
>>
>> --
>>
>> Dr. Nathan Crawford              nathan.crawford at uci.edu
>> Modeling Facility Director
>> Department of Chemistry
>> 1102 Natural Sciences II         Office: 2101 Natural Sciences II
>> University of California, Irvine  Phone: 949-824-4508
>> Irvine, CA 92697-2025, USA
>>
>>
>
>
> --
>
> Dr. Nathan Crawford              nathan.crawford at uci.edu
> Modeling Facility Director
> Department of Chemistry
> 1102 Natural Sciences II         Office: 2101 Natural Sciences II
> University of California, Irvine  Phone: 949-824-4508
> Irvine, CA 92697-2025, USA
>
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
>
>


-- 

Dr. Nathan Crawford              nathan.crawford at uci.edu
Modeling Facility Director
Department of Chemistry
1102 Natural Sciences II         Office: 2101 Natural Sciences II
University of California, Irvine  Phone: 949-824-4508
Irvine, CA 92697-2025, USA