[lustre-discuss] Write Performance is Abnormal for max_dirty_mb Value of 2047

Patrick Farrell pfarrell at ddn.com
Sun Mar 27 10:54:34 PDT 2022


Hasan,

Historically, there have been several bugs related to write grant when max_dirty_mb is set to large values (depending on a few other details of system setup).

Write grant allows the client to write data into memory and write it out asynchronously.  When write grant is not available to the client, the client is forced to do sync writes at small sizes.  The result looks exactly like what you're seeing: write performance drops severely.
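
If it helps, here is a rough sketch of how you could watch grant while the test runs.  This is just an illustration, assuming lctl is in PATH and that the client exposes the usual osc.*.cur_grant_bytes and osc.*.cur_dirty_bytes parameters:

#!/usr/bin/env python3
# Illustration only: poll write grant and dirty bytes on a Lustre client.
# Assumes 'lctl' is in PATH and the usual osc.* parameter names.
import subprocess
import time

def read_params(pattern):
    """Return {parameter_name: int_value} for an 'lctl get_param' pattern."""
    out = subprocess.run(["lctl", "get_param", pattern],
                         capture_output=True, text=True, check=True).stdout
    return {name.strip(): int(val) for name, _, val in
            (line.partition("=") for line in out.splitlines()) if val}

# If cur_grant_bytes collapses toward zero while the workload is writing,
# the client has run out of grant and is likely doing small sync writes.
for _ in range(30):
    grant = sum(read_params("osc.*.cur_grant_bytes").values())
    dirty = sum(read_params("osc.*.cur_dirty_bytes").values())
    print(f"grant={grant / 2**20:.1f} MiB  dirty={dirty / 2**20:.1f} MiB")
    time.sleep(1)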

Depending on what version you're running, you may not have fixes for these bugs.  You could either try a newer Lustre version (you didn't mention what you're running) or just use a smaller value of max_dirty_mb.
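
For example, to try a smaller per-OSC limit on the fly (just a sketch; 1024 is an arbitrary value to experiment with, needs root, and the change is not persistent across remounts):

#!/usr/bin/env python3
# Illustration only: lower max_dirty_mb on every OSC and read it back.
# Assumes 'lctl' is in PATH; 1024 MiB is just an example value to try.
import subprocess

subprocess.run(["lctl", "set_param", "osc.*.max_dirty_mb=1024"], check=True)
subprocess.run(["lctl", "get_param", "osc.*.max_dirty_mb"], check=True)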

I am surprised that you're still seeing a speedup from max_dirty_mb values over 1 GiB.

Can you describe your system a bit more?  How many OSTs do you have, and how many stripes are you using?  max_dirty_mb is a per-OST value on the client, not a global one.
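
In other words, the client-wide dirty ceiling is roughly max_dirty_mb multiplied by the number of OSCs, which you can sanity-check with something like this (a sketch, assuming the usual parameter layout):

#!/usr/bin/env python3
# Illustration only: max_dirty_mb applies per OSC (one per OST connection),
# so the client-wide dirty-page ceiling is roughly the sum over all OSCs.
import subprocess

out = subprocess.run(["lctl", "get_param", "osc.*.max_dirty_mb"],
                     capture_output=True, text=True, check=True).stdout
per_osc = [int(line.split("=", 1)[1]) for line in out.splitlines() if "=" in line]
print(f"{len(per_osc)} OSCs, per-OSC values: {sorted(set(per_osc))}")
print(f"approximate aggregate dirty ceiling: {sum(per_osc)} MiB")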

-Patrick
________________________________
From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Hasan Rashid via lustre-discuss <lustre-discuss at lists.lustre.org>
Sent: Friday, March 25, 2022 11:45 AM
To: lustre-discuss at lists.lustre.org <lustre-discuss at lists.lustre.org>
Subject: [lustre-discuss] Write Performance is Abnormal for max_dirty_mb Value of 2047

Hi Everyone,

As the manual suggests, the valid range for max_dirty_mb is any value larger than 0 and smaller than the lesser of 2048 MiB and 1/4 of the client's RAM. On my system, the client has 196 GiB of RAM, so the maximum valid value for max_dirty_mb (mdm) is 2047 MiB.
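
For clarity, here is the arithmetic behind that number, as a quick sketch of the manual's constraint as we read it:

# Constraint from the manual: 0 < max_dirty_mb < min(2048 MiB, RAM/4).
# With 196 GiB of client RAM, RAM/4 = 50176 MiB, so the 2048 MiB limit
# is the binding one and the largest accepted value is 2047 MiB.
ram_mib = 196 * 1024
cap = min(2048, ram_mib // 4)   # exclusive upper bound
print(cap - 1)                  # -> 2047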

However, when we set max_dirty_mb to 2047, we see very low write throughput for all of the Filebench workloads we have tested so far. Details for one example workload are below.

Workload Detail: We issue only 1 MiB random writes from a single process and a single thread to one large 5 GiB file.
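
For reference, the access pattern is roughly equivalent to the following Python sketch (the mount point, file name, and write count are placeholders, not our actual Filebench configuration):

#!/usr/bin/env python3
# Rough equivalent of the workload: one thread issuing 1 MiB random
# writes to a single 5 GiB file.  Path and write count are placeholders.
import os
import random

PATH = "/mnt/lustre/testfile"   # placeholder mount point and file name
FILE_SIZE = 5 * 2**30           # 5 GiB
IO_SIZE = 1 * 2**20             # 1 MiB
N_WRITES = 4096                 # placeholder; Filebench controls the real count

buf = os.urandom(IO_SIZE)
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
os.ftruncate(fd, FILE_SIZE)
for _ in range(N_WRITES):
    offset = random.randrange(FILE_SIZE // IO_SIZE) * IO_SIZE
    os.pwrite(fd, buf, offset)
os.close(fd)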

Observed Result: As shown in the diagram below, as we increase the mdm value from 768 to 1792 in steps of 256, write throughput increases gradually. However, at an mdm value of 2047, throughput drops very significantly. This observation holds for all the workloads we have tested so far.


[Figure: write throughput vs. max_dirty_mb, increasing from 768 to 1792 MiB and dropping sharply at 2047 MiB]

I am unable to figure out why performance is so low at an mdm value of 2047. Please share any insights that would help me understand this scenario.

Best Wishes,
Md Hasanur Rashid