[lustre-discuss] Robinhood scan time

Iannetti, Gabriele G.Iannetti at gsi.de
Wed Dec 9 01:49:08 PST 2020


Hi Amit,

We also faced very slow full-scan performance in the past.

As Aurélien mentioned before, it is essential to investigate the processing stages reported in the Robinhood logs.

In our setup the GET_FID stage was the bottleneck: across the periodic stats dumps it was most often the stage with a relatively low total number of processed entries.
So increasing nb_threads_scan helped.
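For reference, a minimal sketch of where that setting lives in the Robinhood configuration (the value below is purely illustrative, not a recommendation for your system):

    FS_Scan {
        # number of threads walking the namespace during a scan
        nb_threads_scan = 48;
    }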

Of course, other stages showing a relatively low total number of processed entries, e.g. DB_APPLY, can instead indicate a bottleneck on the database.
So keep in mind that there are multiple layers to take into consideration for performance tuning.

For running multiple file system scan tests you could consider doing a partial scan (on the same test data) with Robinhood instead of scanning the whole file system, which takes much more time.
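For example, something along these lines (paths and config file name are just placeholders; as far as I know the --scan=<path> partial-scan form is available in Robinhood 3.x):

    # full scan of the whole file system (expensive to repeat)
    robinhood -f /etc/robinhood.d/scratch.conf --scan --once

    # partial scan of a dedicated test subtree (much faster to repeat)
    robinhood -f /etc/robinhood.d/scratch.conf --scan=/scratch/rbh_test --once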

I would like to share a diagram with you (attached), showing a comparison of nb_threads_scan = 64 vs. 2.
64 was the maximum we have tested so far. In the production system the value is set to 48,
since more is not always better: as far as I can remember, we hit main memory issues with 64.

Best regards
Gabriele



________________________________________
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Degremont, Aurelien <degremoa at amazon.com>
Sent: Tuesday, December 8, 2020 10:39
To: Kumar, Amit; Stephane Thiell
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Robinhood scan time

There could be lots of differences between these two systems.
- What is the backend FS type? (ZFS or LDiskfs)
- How many MDTs do you have?
- Are 2 threads enough to maximize your scan throughput? Stephane said he used 4 and 8 of them.
- What is the workload running on the MDT at the same time? Is it already overloaded by your users' jobs?

Robinhood also dumps its pipeline stats regularly in the logs. From those you can spot which step of the pipeline is slowing you down.
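For example, something like this pulls the periodic statistics blocks out of the log (the log path is just an example, adjust it to your setup):

    # extract the periodic stats Robinhood writes during the scan
    grep 'STATS' /var/log/robinhood/scratch.log | tail -n 100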

Aurélien

On 07/12/2020 at 20:59, "Kumar, Amit" <ahkumar at mail.smu.edu> wrote:


    Hi Stephane & Aurélien

    Here are the stats that I see in my logs:

    Below are the best and worst avg. speeds I noted in the log, with nb_threads_scan=2:
    2020/11/03 16:51:04 [4850/3] STATS |      avg. speed  (effective):    618.32 entries/sec (3.23 ms/entry/thread)
    2020/11/25 18:06:10 [4850/3] STATS |      avg. speed  (effective):    187.93 entries/sec (10.62 ms/entry/thread)

    Finally, the full scan result is below:
    2020/11/25 17:13:41 [4850/4] FS_Scan | Full scan of /scratch completed, 369729104 entries found (123 errors). Duration = 1964257.21s

    Stephane, now I wonder what could have caused the poor scanning performance. When I kicked off my initial scan during LAD with the same number of threads (2), the scan, together with some user jobs over the following days, generated 150-200 million file open/close operations and as a result filled up my changelog sooner than I expected. I had to cancel that first initial scan to bring the situation under control. After I cleared the changelog, I asked Robinhood to perform a new full scan. I am not sure whether this cancel and restart could have caused delays, with additional database lookups for the ~200 million files that had already been scanned by then? The other thing you point out is that you have RAID-10 SSDs; on our end I have 3.6TB of SSDs in RAID-5, which probably explains the slowness?

    I wasn't sure of the impact of the scan, hence I chose only 2 threads. I am guessing I could bump that up to 4 next time to see if it benefits my scan times.

    Thank you,
    Amit

    -----Original Message-----
    From: Stephane Thiell <sthiell at stanford.edu>
    Sent: Monday, December 7, 2020 11:43 AM
    To: Degremont, Aurelien <degremoa at amazon.com>
    Cc: Kumar, Amit <ahkumar at mail.smu.edu>; Russell Dekema <dekemar at umich.edu>; lustre-discuss at lists.lustre.org
    Subject: Re: [lustre-discuss] Robinhood scan time

    Hi Amit,

    Your number is very low indeed.

    At our site, we're seeing ~100 million files/day during a Robinhood scan with nb_threads_scan = 4, on hardware using Intel-based CPUs:

    2020/11/16 07:29:46 [126653/2] STATS |      avg. speed  (effective):   1207.06 entries/sec (3.31 ms/entry/thread)

    2020/11/16 07:31:44 [126653/29] FS_Scan | Full scan of /oak completed, 1508197871 entries found (65 errors). Duration = 1249490.23s

    In that case, our Lustre MDS and Robinhood server are both running on 2 x E5-2643 v3 CPUs @ 3.40GHz.
    The Robinhood server has 768GB of RAM and 7TB of SSDs in RAID-10 for the DB.

    On another filesystem, using AMD Naples-based CPUs, with a dedicated Robinhood DB hosted on a different server with AMD Rome CPUs, we're seeing a rate of 266M files/day during a Robinhood scan with nb_threads_scan = 8:

    2020/09/20 21:43:46 [25731/4] FS_Scan | Full scan of /fir completed, 877905438 entries found (744 errors). Duration = 284564.88s


    Best,

    Stephane

    > On Dec 7, 2020, at 4:49 AM, Degremont, Aurelien <degremoa at amazon.com> wrote:
    >
    > Hi Amit,
    >
    > Thanks for this data point, that's interesting.
    > Robinhood prints a scan summary in its logfile at the end of a scan. It would be nice if you could copy/paste it, for future reference.
    >
    > Aurélien
    >
    > On 04/12/2020 at 23:39, "lustre-discuss on behalf of Kumar, Amit" <lustre-discuss-bounces at lists.lustre.org on behalf of ahkumar at mail.smu.edu> wrote:
    >
    >
    >    Dual Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz;
    >    256GB RAM
    >    System x3650 M5
    >    Storage for MDT is from NetApp EF560.
    >
    >    Best regards,
    >    Amit
    >
    >    -----Original Message-----
    >    From: Russell Dekema <dekemar at umich.edu>
    >    Sent: Friday, December 4, 2020 4:27 PM
    >    To: Kumar, Amit <ahkumar at mail.smu.edu>
    >    Cc: lustre-discuss at lists.lustre.org
    >    Subject: Re: [lustre-discuss] Robinhood scan time
    >
    >    Greetings,
    >
    >    What kind of hardware are you running on your metadata array?
    >
    >    Cheers,
    >    Rusty Dekema
    >
    >    On Fri, Dec 4, 2020 at 5:12 PM Kumar, Amit <ahkumar at mail.smu.edu> wrote:
    >>
    >> HI All,
    >>
    >>
    >>
    >> During LAD’20 Andreas asked if I could share the Robinhood scan time for the 369 million files we have, so here it is. It took ~23 days to complete the initial scan of all 369 million files, on a dedicated Robinhood server that has 384GB of RAM. I had it set up with all the database and client tweaks mentioned in the Robinhood documentation. I only used 2 threads for this scan. Hope this reference helps.
    >>
    >>
    >>
    >> Thank you,
    >>
    >> Amit
    >>
    >>
    >>
    >
    >


_______________________________________________
lustre-discuss mailing list
lustre-discuss at lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: fullscan_lustre_64_vs_2_nb_threads_scan_get_fid_stage_comparision.svg
Type: image/svg+xml
Size: 173764 bytes
Desc: fullscan_lustre_64_vs_2_nb_threads_scan_get_fid_stage_comparision.svg
URL: <http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/attachments/20201209/13b66ac3/attachment-0001.svg>

