[lustre-discuss] Lustre caching behavior

Oleg Drokin green at whamcloud.com
Tue Mar 24 16:56:31 PDT 2026


On Tue, 2026-03-24 at 23:42 +0000, Patrick Farrell wrote:
> 
> John,
> 
> Are you able to pin this down into more of a reproducer?  Even just a
> more granular description.
> 
> I’m curious to explore it - this is poor behavior, not desirable for
> sure.  I’m curious in particular to see about the lock cancellation -
> my understanding had been the glimpse request to read lock path was
> entirely opportunistic (NONBLOCKING in ldlm speak) - and would never
> cause a cancel (ie, my understanding doesn’t accord with Andreas’s).
>  I was pretty sure about that.

We definitely do consider dropping the lock after a glimpse if it is unused:
/**
 * Callback handler for receiving incoming glimpse ASTs.
 *
 * This only can happen on client side.  After handling the glimpse AST
 * we also consider dropping the lock here if it is unused locally for a
 * long time.
 */
static void ldlm_handle_gl_callback(struct ptlrpc_request *req,
                                    struct ldlm_namespace *ns,
                                    struct ldlm_request *dlm_req,
                                    struct ldlm_lock *lock)


...
        if (lock->l_granted_mode == LCK_PW &&
            !lock->l_readers && !lock->l_writers &&
            ktime_after(ktime_get(),
                        ktime_add(lock->l_last_used,
                                  ns->ns_dirty_age_limit))) {
...
                if (ldlm_bl_to_thread_lock(ns, ld, lock))
                        ldlm_handle_bl_callback(ns, ld, lock);

And ns_dirty_age_limit is 10 seconds.
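For what it's worth, the check above boils down to a predicate like the following standalone sketch (simplified and hypothetical: `glimpse_should_drop` is a made-up name, plain integer nanoseconds stand in for ktime_t, and the namespace struct is elided):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified lock modes; LCK_PW mirrors Lustre's protected-write mode. */
enum lock_mode { LCK_PR, LCK_PW };

/*
 * Hypothetical, simplified version of the condition in
 * ldlm_handle_gl_callback(): a PW lock with no local readers or
 * writers whose last use is older than dirty_age_limit is a
 * candidate for cancellation.  Times are nanoseconds, like ktime_t.
 */
static bool glimpse_should_drop(enum lock_mode granted_mode,
                                int readers, int writers,
                                int64_t now_ns, int64_t last_used_ns,
                                int64_t dirty_age_limit_ns)
{
        return granted_mode == LCK_PW &&
               readers == 0 && writers == 0 &&
               now_ns > last_used_ns + dirty_age_limit_ns;
}
```

So a glimpse arriving more than dirty_age_limit after the last local use of a PW lock can indeed lead to the lock being handed to the blocking-callback path.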


John, what version are you testing on, by the way?
There were some LRU changes in master very recently and I wonder if
they made any difference for this kind of workload.

