[lustre-discuss] Degraded read performance with Large Bulk IO (16MB RPC)

Pinkesh Valdria pinkesh.valdria at oracle.com
Tue Dec 10 00:39:51 PST 2019


I was expecting the same or better read performance with Large Bulk IO (16MB RPC), but instead I see a degradation. Do I need to tune any other parameters to benefit from Large Bulk IO? I would appreciate any pointers to troubleshoot further.

 

Throughput before 
Read:  2563 MB/s
Write:  2585 MB/s
 

Throughput after
Read:  1527 MB/s (down by ~1036 MB/s)
Write:  2859 MB/s
 

 

The changes I made are:

On the OSS:
lctl set_param obdfilter.lfsbv-*.brw_size=16
 

On the clients:
Unmounted and remounted the filesystem, then set:
lctl set_param osc.lfsbv-OST*.max_pages_per_rpc=4096   (this was auto-updated after the remount)
lctl set_param osc.*.max_rpcs_in_flight=64   (I had to manually increase this to 64; after the remount it was auto-set to 8, and read/write performance was poor at that value)
lctl set_param osc.*.max_dirty_mb=2040   (setting the value to 2048 failed with a "Numerical result out of range" error; previously it was set to 2000, when I got good performance)
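
For reference, this is how I confirm the effective values after the remount; it is just lctl get_param on the same parameters (lfsbv is my filesystem name):

# On the OSS (brw_size is set in MB, so 16 means 16MB RPCs)
lctl get_param obdfilter.lfsbv-*.brw_size

# On a client (4096 pages x 4KB page size = 16MB per RPC)
lctl get_param osc.lfsbv-OST*.max_pages_per_rpc
lctl get_param osc.lfsbv-OST*.max_rpcs_in_flight
lctl get_param osc.lfsbv-OST*.max_dirty_mb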
 

 

My other settings: 
lnetctl net add --net tcp1 --if $interface --peer-timeout 180 --peer-credits 128 --credits 1024
echo "options ksocklnd nscheds=10 sock_timeout=100 credits=2560 peer_credits=63 enable_irq_affinity=0"  >  /etc/modprobe.d/ksocklnd.conf
lfs setstripe -c 1 -S 1M /mnt/mdt_bv/test1
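
And this is how I verify those settings took effect (a rough check from my environment):

# Confirm the credits / peer_credits / peer_timeout on the LNet NI
lnetctl net show --net tcp1 -v

# Confirm the test file layout (stripe count 1, stripe size 1MB)
lfs getstripe /mnt/mdt_bv/test1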
 
