[lustre-discuss] stripe count recommendation, and proposal for auto-stripe tool

Dilger, Andreas andreas.dilger at intel.com
Thu Jun 9 13:30:31 PDT 2016

On Jun 9, 2016, at 10:45, Nathan Dauchy - NOAA Affiliate <nathan.dauchy at noaa.gov> wrote:
> Greetings All,
> After looking at this topic further, and discussions with a colleague at NASA, I could be convinced to be more aggressive in REstriping files wider with "lfs_migrate -A".  I would like to know if anyone has recent benchmark results or analysis to support or refute the following...
> When using lfs_migrate, each file is handled with a single process, so the multi-client writing problem identified by Patrick does not apply.  Furthermore, if a user is migrating files to a "hot" tier of storage, they presumably know how that data set will be used and should specify the stripe count based on future application read access pattern.  In other cases (such as capacity balancing), the performance of lfs_migrate is probably not critical, so the bottom line is that we should not auto-select stripe count based on *write* performance.
> I have searched around for metrics to show whether *read* performance tails off with number of stripes and/or clients at some point.  Also relevant would be data quantifying just how much the increased overhead of each stripe actually affects metadata operations (particularly "ls -l").  With those numbers, we could make a more informed decision about the algorithm to use for "lfs_migrate -A" in LU-8207.

There is definitely an increase in metadata overhead when a file has more stripes.  The LOV EA grows with stripe count, and beyond 5-6 stripes (depending on the other xattrs present) it no longer fits inside the MDT inode, which causes a significant non-linear jump in metadata overhead.  There is also the added cost of fetching attributes from each of the file's OST objects.
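As a rough back-of-the-envelope sketch (the struct sizes here are assumptions based on lov_mds_md_v1 and lov_ost_data_v1, roughly a 32-byte header plus 24 bytes per stripe; not figures from this thread):

```shell
# Rough sketch, not authoritative: approximate LOV EA size vs. stripe count.
# Assumes ~32-byte lov_mds_md header + ~24 bytes per OST object entry.
lov_ea_bytes() {
    echo $(( 32 + 24 * $1 ))
}
# With 512-byte MDT inodes only ~150-200 bytes of in-inode xattr space
# remain, so the EA spills into an external block around 5-6 stripes,
# adding an extra block read to every stat of the file.
```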

If files are accessed concurrently (read or write), then the bandwidth benefit of having more stripes far outweighs the cost of accessing the file the first time, since the initial inode access is measured in milliseconds, while the full read or write of a large file might take minutes.

> * Some good data from my colleague at NASA is in http://people.nas.nasa.gov/~kolano/papers/hpdic13.pdf and shows stat operations clearly getting slower with stripe count, but I'm wondering if that might be outdated based on more recent MDS threading performance improvements.  That paper also shows multi-client read performance improving up to about 16 stripes, then leveling off.

The MDS threading improvements will not address the overhead from OST stripe attributes.  If there were an expectation that many files would have larger stripe counts (more than 5-6), then it would make sense to format the MDT with a larger inode size ("-I 1024") so the LOV EA could fit into the core inode instead of an external xattr block.  However, if only a small number of files have many stripes, then the larger inode size would be a tax on IO and cache space for the majority of inodes.
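For concreteness, such a format might look like the following (a hypothetical example; the device, fsname, and index are placeholders, not from this thread):

```shell
# Hypothetical: format an MDT with 1024-byte inodes so wider LOV EAs
# stay inside the inode.  Device and fsname are placeholders.
mkfs.lustre --mdt --fsname=testfs --index=0 \
    --mkfsoptions="-I 1024" /dev/sdb
```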

> * This paper shows single-client read performance degrading primarily after 16 or 32 stripes:
> https://cug.org/5-publications/proceedings_attendee_lists/CUG09CD/S09_Proceedings/pages/authors/11-15Wednesday/13A-Crosby/LCROSBY-PAPER.pdf
> * Another reference is at http://wiki.opensfs.org/MDS_SMP_Node_Affinity_FinalReport_wiki_version ...but it lacks the metadata read operations that are most critical after migrating existing data.  (There is no degradation of Opencreate and Unlink IOPS up to 4 stripes, though.)

See also the ORNL/Intel PFL presentation at LUG or the OLCF Lustre Ecosystem Workshop.  It contains some benchmarks on metadata vs. IO performance for varying stripe counts.

> Therefore, the actual considerations on selecting stripe count when REstriping files are:
>   * OST capacity and load balancing (more stripes are always better?)

Though this introduces more points of failure for any particular file.

>   * Metadata performance, primarily read ops (progressively worse with more than ~4 stripes)
>   * Single-client read performance (degrades slightly with more stripes?)
>   * Multi-client read performance (more stripes are better up to a point, then performance degrades?)

I don't think performance degrades for multi-client reads with increasing stripe count?  I guess it depends on whether the bottleneck is on the client or on the server.

> Possibly something like "stripe per GB up to 16 stripes, then stripe per 100 GB up to number of OSTs" is better than the "Log2()" algorithm after all?  Can we even do stripe per 0.5 GB?  What data is available to determine whether 100 GB is the right value, or should it be the 1% of smallest OST as already proposed for http://review.whamcloud.com/#/c/20552/ ?
> Thanks,
> Nathan
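
The hybrid heuristic proposed above ("stripe per GB up to 16 stripes, then stripe per 100 GB up to the number of OSTs") could be sketched in plain shell arithmetic like this (NUM_OSTS is an assumed site parameter, and the 16/100 breakpoints are just the values from the proposal, not tuned numbers):

```shell
# Sketch of the proposed hybrid rule: one stripe per GiB up to 16
# stripes, then one additional stripe per 100 GiB, capped at the
# number of OSTs.  NUM_OSTS is an assumed, site-specific value.
: "${NUM_OSTS:=64}"

hybrid_stripe_count() {
    local size_gb=$(( $1 >> 30 ))        # bytes -> GiB
    local count
    if [ "$size_gb" -le 16 ]; then
        count=$(( size_gb > 0 ? size_gb : 1 ))
    else
        count=$(( 16 + (size_gb - 16) / 100 ))
    fi
    [ "$count" -gt "$NUM_OSTS" ] && count=$NUM_OSTS
    echo "$count"
}
```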
> On Wed, May 18, 2016 at 1:30 PM, Dilger, Andreas <andreas.dilger at intel.com> wrote:
>> On 2016/05/18, 11:22, "Nathan Dauchy - NOAA Affiliate" <nathan.dauchy at noaa.gov> wrote:
>>> Greetings All,
>>> I'm looking for your experience and perhaps some lively discussion regarding "best practices" for choosing a file stripe count.  The Lustre manual has good tips on "Choosing a Stripe Size", and in practice the default 1M rarely causes problems on our systems.  Stripe count, on the other hand, is far more difficult: it is hard to choose a single value that is efficient for a general-purpose, multi-use, site-wide file system.
>>> Since there is the "increased overhead" of striping, and weather applications do unfortunately write MANY tiny files, we usually keep the filesystem default stripe count at 1.  Unfortunately, there are several users who then write very large and shared-access files with that default.  I would like to be able to tell them to restripe... but without digging into the specific application and access pattern it is hard to know what count to recommend.  Plus there is the "stripe these but not those" confusion... it is common for users to have a few very large data files and many small log or output image files in the SAME directory.
>> This is exactly what the ORNL "Progressive File Layout" (PFL) project is about.  Automatically increase the stripe size of a file as the size grows.  That will allow a single default layout to describe both small and large files, and go from e.g. 1 stripe to 8 stripes to 256 stripes as the size increases.
>>> What do you all recommend as a reasonable rule of thumb that works for "most" users' needs, where stripe count can be determined based only on static data attributes (such as file size)?  I have heard a "stripe per GB" idea, but some have said that escalates to too many stripes too fast.  ORNL has a knowledge base article that says to use a stripe count of "File size / 100 GB", but does that make sense for smaller, non-DOE sites?  Would stripe count = Log2(size_in_GB)+1 be more generally reasonable?  For a 1 TB file that actually works out similar to ORNL's rule, it just gets there more gradually:
>>>     https://www.olcf.ornl.gov/kb_articles/lustre-basics/#Stripe_Count
>> Using the log2() value seems reasonable.
>>> Ideally, I would like to have a tool to give the users and say "go restripe your directory with this command" and it will do the right thing in 90% of cases.  See the rough patch to lfs_migrate (included below) which should help explain what I'm thinking.  Probably there are more efficient ways of doing things, but I have tested it lightly and it works as a proof-of-concept.
>> I'd welcome this as a patch submitted to Gerrit.
>>> With a good programmatic rule of thumb, we (as a Lustre community!) can eventually work with application developers to embed the stripe count selection into their code and get things at least closer to right up front.  Even if trial and error is involved to find the optimal setting, at least the rule of thumb can be a _starting_point_ for the users, and they can tweak it from there based on application, model, scale, dataset, etc.
>>> Thinking farther down the road, with progressive file layout, what algorithm will be used as the default?
>> To be clear, the PFL implementation does not currently have an algorithmic layout, but rather a series of thresholds based on file size that select different layouts (initially stripe counts, but this could be anything, including stripe size, OST pools, etc).  The PFL size thresholds and stripe counts _could_ be set up (manually) as a geometric series, but they can also be totally arbitrary if you want.
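
As an illustration of that kind of threshold layout, a directory default matching the earlier 1/8/256-stripe example could be expressed with the composite-layout setstripe syntax (the extent boundaries below are arbitrary illustrative values, and the path is a placeholder):

```shell
# Illustrative PFL-style layout: 1 stripe for the first 4 MiB,
# 8 stripes up to 256 MiB, then 256 stripes for the rest (-E -1
# means "to end of file").  Boundaries and path are examples only.
lfs setstripe -E 4M -c 1 -E 256M -c 8 -E -1 -c 256 /mnt/lustre/dir
```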
>>> If Lustre gets to the point where it can rebalance OST capacity behind the scenes, could it also make some intelligent choice about restriping very large files to spread out load and better balance capacity?  (Would that mean we need a bit set on the file to flag whether the stripe info was set specifically by the user or automatically by Lustre tools or it was just using the system default?)  Can the filesystem track concurrent access to a file, and perhaps migrate the file and adjust stripe count based on number of active clients?
>> I think this would be an interesting task for RobinHood, since it already has much of this information.  It could find large files with low stripe counts and restripe them during OST rebalancing.
>>> I appreciate any and all suggestions, clarifying questions, heckles, etc.  I know this is a lot of questions, and I certainly don't expect definitive answers on all of them, but I hope it is at least food for thought and discussion! :)
>> One last comment on the patch below:
>>> --- lfs_migrate-2.7.1 2016-05-13 12:46:06.828032000 +0000
>>> +++ lfs_migrate.auto-count 2016-05-17 21:37:19.036589000 +0000
>>> @@ -21,8 +21,10 @@
>>>  usage() {
>>>      cat -- <<USAGE 1>&2
>>> -usage: lfs_migrate [-c <stripe_count>] [-h] [-l] [-n] [-q] [-R] [-s] [-y] [-0]
>>> +usage: lfs_migrate [-A] [-c <stripe_count>] [-h] [-l] [-n] [-q] [-R] [-s] [-v] [-y] [-0]
>>>                     [file|dir ...]
>>> +    -A restripe file using an automatically selected stripe count
>>> +       currently Stripe Count = Log2(size_in_GB) + 1
>>>      -c <stripe_count>
>>>         restripe file using the specified stripe count
>>>      -h show this usage message
>>> @@ -31,11 +33,11 @@
>>>      -q run quietly (don't print filenames or status)
>>>      -R restripe file using default directory striping
>>>      -s skip file data comparison after migrate
>>> +    -v be verbose and print information about each file
>>>      -y answer 'y' to usage question
>>>      -0 input file names on stdin are separated by a null character
>>> -The -c <stripe_count> option may not be specified at the same time as
>>> -the -R option.
>>> +Only one of the '-A', '-c', or '-R' options may be specified at a time.
>>>  If a directory is an argument, all files in the directory are migrated.
>>>  If no file/directory is given, the file list is read from standard input.
>>> @@ -48,15 +50,19 @@
>>>  OPT_CHECK=y
>>> -while getopts "c:hlnqRsy0" opt $*; do
>>> +while getopts "Ac:hlnqRsvy0" opt $*; do
>>>      case $opt in
>>> + A) OPT_AUTOSTRIPE=y;;
>>>   l) OPT_NLINK=y;;
>>>   n) OPT_DRYRUN=n; OPT_YES=y;;
>>>   q) ECHO=:;;
>>>   R) OPT_RESTRIPE=y;;
>>>   s) OPT_CHECK="";;
>>> + v) OPT_VERBOSE=y;;
>>>   y) OPT_YES=y;;
>>>   0) OPT_NULL=y;;
>>>   h|\?) usage;;
>>> @@ -69,6 +75,16 @@
>>>   echo "$(basename $0) error: The -c <stripe_count> option may not" 1>&2
>>>   echo "be specified at the same time as the -R option." 1>&2
>>>   exit 1
>>> +elif [ "$OPT_STRIPE_COUNT" -a "$OPT_AUTOSTRIPE" ]; then
>>> + echo ""
>>> + echo "$(basename $0) error: The -c <stripe_count> option may not" 1>&2
>>> + echo "be specified at the same time as the -A option." 1>&2
>>> + exit 1
>>> +elif [ "$OPT_AUTOSTRIPE" -a "$OPT_RESTRIPE" ]; then
>>> + echo ""
>>> + echo "$(basename $0) error: The -A option may not be specified at" 1>&2
>>> + echo "the same time as the -R option." 1>&2
>>> + exit 1
>>>  fi
>>>  if [ -z "$OPT_YES" ]; then
>>> @@ -107,7 +123,7 @@
>>>   $ECHO -n "$OLDNAME: "
>>>   # avoid duplicate stat if possible
>>> - TYPE_LINK=($(LANG=C stat -c "%h %F" "$OLDNAME" || true))
>>> + TYPE_LINK=($(LANG=C stat -c "%h %F %s" "$OLDNAME" || true))
>>>   # skip non-regular files, since they don't have any objects
>>>   # and there is no point in trying to migrate them.
>>> @@ -127,11 +143,6 @@
>>>   continue
>>>   fi
>>> - if [ "$OPT_DRYRUN" ]; then
>>> - echo -e "dry run, skipped"
>>> - continue
>>> - fi
>>> -
>>>   if [ "$OPT_RESTRIPE" ]; then
>>>   UNLINK=""
>>>   else
>>> @@ -140,16 +151,43 @@
>>>   # then we don't need to do this getstripe/mktemp stuff.
>>>   UNLINK="-u"
>>> - COUNT=$($LFS getstripe -c "$OLDNAME" \
>>> - 2> /dev/null)
>>>   SIZE=$($LFS getstripe $LFS_SIZE_OPT "$OLDNAME" \
>>>         2> /dev/null)
>>> + if [ "$OPT_AUTOSTRIPE" ]; then
>>> + # file size in bytes, captured by the "stat ... %s" above
>>> + FILE_SIZE=${TYPE_LINK[2]}
>>> + # (math in bash is dumb, so depend on common tools, and there are options for that...)
>>> + # Stripe Count = Log2(size_in_GB) + 1
>>> + #COUNT=$(echo $FILE_SIZE | awk '{printf "%.0f\n",log($1/1024/1024/1024)/log(2)}')
>>> + #COUNT=$(printf "%.0f\n" $(echo "l($FILE_SIZE/1024/1024/1024) / l(2)" | bc -l))
>>> + COUNT=$(echo "l($FILE_SIZE/1024/1024/1024) / l(2) + 1" | bc -l | cut -d . -f 1)
>>> + # Stripe Count = size_in_GB
>>> + #COUNT=$(echo "scale=0; $FILE_SIZE/1024/1024/1024" | bc -l | cut -d . -f 1)
>> Instead of involving "bc", which is not guaranteed to be installed, why not just have a simple "divide by 2, increment stripe_count" loop after converting bytes to GiB?  That would be a few cycles for huge files, but probably still faster than fork/exec of an external binary as it could be at most 63 - 30 = 33 loops and usually many fewer.
>> Cheers, Andreas
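
The bc-free approach suggested above could be sketched like this (a minimal sketch, assuming bytes are shifted down to GiB first; function name is hypothetical):

```shell
# Sketch of the suggested pure-shell loop: convert bytes to GiB,
# then halve until the value reaches 1, counting iterations.
# Yields floor(log2(size_in_GiB)) + 1, with a minimum of 1, and
# runs at most 33 times (63 - 30 bits) for any 64-bit size.
auto_stripe_count() {
    local gb=$(( $1 >> 30 ))     # bytes -> GiB
    local count=1
    while [ "$gb" -gt 1 ]; do
        gb=$(( gb >> 1 ))        # divide by 2
        count=$(( count + 1 ))
    done
    echo "$count"
}
```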
>>> + [ "$COUNT" -lt 1 ] && COUNT=1
>>> + # (does it make sense to skip the file if old
>>> + # and new stripe count are identical?)
>>> + else
>>> + COUNT=$($LFS getstripe -c "$OLDNAME" \
>>> + 2> /dev/null)
>>> + fi
>>>   [ -z "$COUNT" -o -z "$SIZE" ] && UNLINK=""
>>>   fi
>>> + if [ "$OPT_DRYRUN" ]; then
>>> + if [ "$OPT_VERBOSE" ]; then
>>> + echo -e "dry run, would use count=${COUNT} size=${SIZE}"
>>> + else
>>> + echo -e "dry run, skipped"
>>> + fi
>>> + continue
>>> + fi
>>> + if [ "$OPT_VERBOSE" ]; then
>>> + echo -n "(count=${COUNT} size=${SIZE}) "
>>> + fi
>>> +
>>> + [ "$SIZE" ] && SIZE=${LFS_SIZE_OPT}${SIZE}
>>> +
>>>   # first try to migrate inside lustre
>>>   # if failed go back to old rsync mode
>>>   if [[ $RSYNC_MODE == false ]]; then

Cheers, Andreas
Andreas Dilger
Lustre Software Architect
Intel Corporation
