[Lustre-discuss] noatime or atime_diff for Lustre 1.8.7?

Mark Day mark.day at rsp.com.au
Fri Dec 7 16:22:28 PST 2012


> 2) Make sure caching is enabled on the oss. 

How do you check/enable this? Is it not enabled by default? 

Cheers, Mark 

----- Original Message -----

From: "Mohr Jr, Richard Frank (Rick Mohr)" <rmohr at utk.edu> 
To: "Grigory Shamov" <gas5x at yahoo.com> 
Cc: lustre-discuss at lists.lustre.org 
Sent: Saturday, 8 December, 2012 5:19:31 AM 
Subject: Re: [Lustre-discuss] noatime or atime_diff for Lustre 1.8.7? 

On Dec 6, 2012, at 2:58 PM, Grigory Shamov wrote: 

> So, on one of our OSS servers the load is now 160. According to collectl, only one OST does most of the job. (We don't do striping on this FS, unless users do it manually on their subdirectories.) 

This sounds similar to situations we see every now and then. The load on the oss server climbs until it is roughly equal to the number of oss threads (which sounds like your case with load = oss_threads = 160), but only a single ost is performing any significant IO. This seems to arise when parallel jobs access the same file which has stripe_count=1. The oss is bombarded with so many requests to a single ost that they backlog and tie up all the oss threads. At that point, all IO to the oss slows to a crawl no matter which ost on the oss is being used. This becomes problematic because even a modest-sized job can effectively DOS an oss server. 
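(As a quick sanity check, and assuming the usual 1.8 proc layout -- the exact parameter paths may differ on your build -- you can compare the load against the OSS service thread count with something like: 

    # On the OSS: how many ost_io service threads are running / allowed
    lctl get_param ost.OSS.ost_io.threads_started
    lctl get_param ost.OSS.ost_io.threads_max
    # Compare against the load average
    uptime

If the load sits near threads_max while only one ost is busy, that matches the pattern described above.) 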

When you encounter these problems, is the IO to the affected ost primarily one-way (i.e., mostly reads or mostly writes)? In our cases, we tend to see this when parallel jobs are reading from a common file. There are a couple of things that I have found that help: 

1) Increase the file striping a lot. This helps spread the load over more osts. We have had success with striping even relatively small files (~10 GB) over 100+ osts. Not only does it reduce load on the oss, but it usually speeds up the application significantly. (See the lfs sketch after this list.) 

2) Make sure caching is enabled on the oss. For us, this seems to help mostly when lots of processes are reading in the same file. (See the lctl sketch further below.) 
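For 1), a rough sketch of what I mean (the paths below are just placeholders; note that you cannot re-stripe an existing file in place -- the layout has to be set on a new file or directory before data is written): 

    # Check the current layout of the shared input file
    lfs getstripe /lustre/work/shared_input.dat
    # Create a new file striped across all available OSTs (-c -1 = all OSTs),
    # or pick an explicit count such as -c 100, then copy the data into it
    lfs setstripe -c -1 /lustre/work/shared_input.wide
    cp /lustre/work/shared_input.dat /lustre/work/shared_input.wide
    # Or set a default wide stripe count on the job directory so that
    # new files created there inherit it
    lfs setstripe -c -1 /lustre/work/jobdir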

Not sure if your situation is exactly like what I have seen, but maybe some of that info can help a bit. 
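For 2), on our 1.8 servers the OSS read cache is controlled through obdfilter parameters; roughly the following, though the parameter names are worth double-checking against your version: 

    # On the OSS: check whether the read cache and writethrough cache
    # are on (1 = enabled)
    lctl get_param obdfilter.*.read_cache_enable
    lctl get_param obdfilter.*.writethrough_cache_enable
    # Enable them if they are off
    lctl set_param obdfilter.*.read_cache_enable=1
    lctl set_param obdfilter.*.writethrough_cache_enable=1
    # Optionally check the size cutoff (bytes) for files the cache will hold
    lctl get_param obdfilter.*.readcache_max_filesize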

-- 
Rick Mohr 
Senior HPC System Administrator 
National Institute for Computational Sciences 
http://www.nics.tennessee.edu 


