[Lustre-discuss] Disabling locks in Lustre

Oleg Drokin oleg.drokin at oracle.com
Fri Aug 27 15:35:36 PDT 2010


Hello!

On Aug 26, 2010, at 1:07 PM, Dulcardo Arteaga Clavijo wrote:

> I am trying to compare the performance of Lustre for parallel write to
> a shared file with locks and
> without locks. But after doing some experiments I didn't see any
> performance improvement when I run without locks.

It all depends on your IO & striping pattern. In the ideal case, where the IO
pattern matches the striping (e.g. a task writes a file in interleaved 4M chunks
from N threads, the stripe size is 4M, and the file is striped across N OSTs),
there would be no benefit.

> My environment is composed by 7 OST/ODT, 1 MGS/MDT, and 32 Clients. In the
> experiment every Client writes 100MB into a shared file.
> 
> Does it exits a difference when running Lustre without locks?
> I am using  ioctl(fd, LL_IOC_SETFLAGS, LL_FILE_IGNORE_LOCK) to disable locks.

Please note that LL_FILE_IGNORE_LOCK only works if you do direct IO. But with
direct IO there is no caching and all writes are synchronous, which defeats any
possible gains from e.g. cache aggregation, and also hides the cost of slower
lock cancellation, where we would need to flush dirty data (since there is no
dirty cached data in the direct IO case).
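
For reference, a minimal sketch of that combination, using the same ioctl you
mentioned. This assumes a mounted Lustre client and the lustre_user.h header
shipped with it (the mount point and file name here are made up), so it is
only meaningful on an actual Lustre filesystem:

```c
/* Sketch: open with O_DIRECT and ask the Lustre client to skip
 * locking on this file descriptor.  Assumes a mounted Lustre
 * filesystem; the path /mnt/lustre/shared is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <lustre/lustre_user.h>

int main(void)
{
    /* O_DIRECT bypasses the page cache, so every write is synchronous
     * and there is no dirty data for lock cancellation to flush. */
    int fd = open("/mnt/lustre/shared",
                  O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Only honored for direct IO, as noted above. */
    if (ioctl(fd, LL_IOC_SETFLAGS, LL_FILE_IGNORE_LOCK) < 0)
        perror("LL_IOC_SETFLAGS");

    /* ... direct IO writes here: buffers and offsets must be aligned ... */

    close(fd);
    return 0;
}
```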

Also, you did not specify what Lustre version you are using. But if you are
using 1.8.2+ (though not 2.0), a feature introduced in bug 18801 disables
client locking during direct IO as pointless and switches to server-side
locking, which has no overhead for shared-file writing as long as you do not
have overlapping writes.

If you want a more realistic test, you need to avoid direct IO and use normal
(buffered) IO instead. Just do shared-file writes for the baseline test, then
use the "group lock" feature (the LL_IOC_GROUP_LOCK ioctl; the argument should
be the same across all of your threads so that they "share a lock" and don't
conflict).
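
A sketch of the group-lock variant follows. Again this assumes a mounted Lustre
client with lustre_user.h available; the group id value and the path are
arbitrary, the only requirement being that every cooperating writer passes the
same id:

```c
/* Sketch: take a Lustre "group lock" on a shared file before doing
 * normal buffered writes.  All writers pass the same group id, so
 * they share one lock and never conflict with each other.
 * Assumes a mounted Lustre client; the path is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <lustre/lustre_user.h>

#define GROUP_ID 42  /* arbitrary, but identical across all writers */

int main(void)
{
    int fd = open("/mnt/lustre/shared", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, LL_IOC_GROUP_LOCK, GROUP_ID) < 0)
        perror("LL_IOC_GROUP_LOCK");

    /* ... normal cached shared-file writes here ... */

    if (ioctl(fd, LL_IOC_GROUP_UNLOCK, GROUP_ID) < 0)
        perror("LL_IOC_GROUP_UNLOCK");

    close(fd);
    return 0;
}
```

Comparing this against the plain buffered baseline isolates the cost of lock
conflicts without giving up the client cache.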

Bye,
    Oleg

