[Lustre-discuss] What's the human translation for: ost_write operation failed with -28

Thomas Guthmann tguthmann at iseek.com.au
Tue Dec 6 20:05:28 PST 2011


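For context on the subject line: Lustre RPC handlers return negated Linux errno values, so -28 is ENOSPC ("No space left on device"). A quick way to decode such codes (a sketch, nothing Lustre-specific):

```python
# Decode a negative Lustre return code as a Linux errno.
import errno
import os

rc = -28                       # the code from the subject line
print(errno.errorcode[-rc])    # ENOSPC
print(os.strerror(-rc))        # No space left on device
```

In this thread the OST is not actually out of disk blocks; the -28 comes from the grant accounting discussed below.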
>>> FYI, I have the following values on the OSS it couldn't connect/write to:
>>> obdfilter.foobar-OST0003.tot_granted=17429659648
>>> obdfilter.foobar-OST0004.tot_granted=13648875520
>>> obdfilter.foobar-OST0005.tot_granted=18136141824
> By default, one single OSC should not own more than 32MB of grant space. With 18GB of total granted space, you should have ~560 clients. How many clients are mounting the filesystem?
Don't fret... 5 clients :)
And only 2 of them write to a defined set of sparse files (no concurrent 
writes). No other files have been created since then, AFAIK. On "day #1" of 
the Lustre filesystem, we created 21 sparse files of 512GB each, and then 
the application wrote into them. We didn't write any other files except 
2 new 512GB sparse files a month ago. (This explains why we have a very 
low number of used inodes - see previous email for lfs df -i.)
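The mismatch above can be sanity-checked with the numbers quoted in this thread. Assuming the ~32 MiB default grant per OSC mentioned by the responder, dividing each OST's tot_granted by 32 MiB gives the implied client count:

```python
# Implied client count per OST, using the obdfilter.foobar-OST000x.tot_granted
# values quoted earlier in this thread.
GRANT_PER_OSC = 32 * 1024 * 1024   # ~32 MiB default grant per OSC (assumption)

tot_granted = {
    "OST0003": 17429659648,
    "OST0004": 13648875520,
    "OST0005": 18136141824,
}
for ost, granted in tot_granted.items():
    print(ost, granted // GRANT_PER_OSC)
# OST0005 alone implies ~540 clients' worth of grant, nowhere near the
# 5 clients actually mounted -- which is why these numbers look suspicious.
```

On a live system one would compare these against the sum of `osc.*.cur_grant_bytes` reported by `lctl get_param` on the clients; a large gap suggests leaked or stale grant on the OSS.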

>>> But, again, my application was writing into sparse files so the space was
>>> already allocated... and the sparse files haven't grown.
> Lustre (like most filesystems) does not allocate blocks for "holes" in sparse files.
Hmm, what do you mean? If it works like any other filesystem, then I 
shouldn't have hit a grant issue, right?
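The responder's point can be demonstrated outside Lustre (a minimal sketch using a hypothetical temp file): creating a sparse file allocates no blocks for the holes, so writing into those holes later still requires fresh block allocation, and on Lustre the client must hold enough grant to cover it.

```python
# Sparse files reserve no blocks for holes: apparent size vs. allocated
# size diverge until data is actually written into the hole.
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.truncate(fd, 1 << 30)                     # 1 GiB apparent size, all hole
    print(os.stat(path).st_size)                 # 1073741824 (apparent)
    print(os.stat(path).st_blocks * 512)         # ~0 bytes actually allocated
    os.pwrite(fd, b"x" * (1 << 20), 100 << 20)   # fill 1 MiB of the hole
    os.fsync(fd)
    print(os.stat(path).st_blocks * 512)         # now >= 1 MiB allocated
finally:
    os.close(fd)
    os.remove(path)
```

So from the OST's perspective the 512GB sparse files were never "already allocated"; each write into a hole consumes new blocks, and new grant.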


More information about the lustre-discuss mailing list