[lustre-discuss] fiemap
Andreas Dilger
adilger at whamcloud.com
Thu Aug 18 14:11:58 PDT 2022
On Aug 18, 2022, at 14:28, John Bauer <bauerj at iodoctors.com> wrote:
Andreas,
Thanks for the reply. I don't think I'm running the Lustre filefrag (see below). Where would I normally find that installed? I downloaded the lustre-release git repository and can't find the filefrag source to build my own. Is that somewhere else?
filefrag is part of the e2fsprogs package ("rpm -qf $(which filefrag)"), so you need to download and install the Lustre-patched e2fsprogs from https://downloads.whamcloud.com/public/e2fsprogs/latest/
More info:
pfe27.jbauer2 334> cat /sys/fs/lustre/version
2.12.8_ddn12
You should really use "lctl get_param version", since the Lustre /proc and /sys files move around on occasion.
The PFL/FLR change for FIEMAP is not included in this version, but it _should_ be irrelevant because the file you are testing is using a plain layout, not PFL or FLR.
pfe27.jbauer2 335> filefrag -v /nobackupp17/jbauer2/dd.dat
Filesystem type is: bd00bd0
File size of /nobackupp17/jbauer2/dd.dat is 104857600 (25600 blocks of 4096 bytes)
/nobackupp17/jbauer2/dd.dat: FIBMAP unsupported
pfe27.jbauer2 336> which filefrag
/usr/sbin/filefrag
John
On 8/18/22 14:57, Andreas Dilger wrote:
What version of Lustre are you using? Does "filefrag -v" from a newer Lustre e2fsprogs (1.45.6.wc3+) work properly?
There was a small change to Lustre's FIEMAP handling to support overstriped files and PFL/FLR files with many stripes and multiple components, since the FIEMAP "restart" mechanism was broken for files that had multiple objects on the same OST index. See LU-11484 for details. That change was included in the 2.14.0 release.
In essence, the fe_device field now encodes the absolute file stripe number in the high 16 bits, and the device number in the low 16 bits (as it did before). Since "filefrag -v" prints fe_device in hex, it now shows "0x<stripe><device>" instead of "0x0000<device>"; this was considered an acceptable tradeoff compared to other "less compatible" changes that would have been needed to implement PFL/FLR handling.
That said, I would have expected this change to result in your tool reporting very large values for fe_device (e.g. OST index + N * 65536), so returning all-zero values is somewhat unexpected.
Cheers, Andreas
On Aug 18, 2022, at 06:27, John Bauer <bauerj at iodoctors.com> wrote:
Hi all,
I am trying to get my llfie program (which uses fiemap) going again, but now the struct fiemap_extent structures I get back from the ioctl call all have fe_device=0. The output from lfs getstripe indicates that the devices are not all 0. The sum of the fe_length members adds up to the file size, so that is working. The fe_physical members look reasonable also. Has something changed? This used to work.
Thanks, John
pfe27.jbauer2 300> llfie /nobackupp17/jbauer2/dd.dat
LustreStripeInfo_get() lum->lmm_magic=0xbd30bd0
listExtents() fe_physical=30643484360704 fe_device=0 fe_length=16777216
listExtents() fe_physical=30646084829184 fe_device=0 fe_length=2097152
listExtents() fe_physical=5705226518528 fe_device=0 fe_length=16777216
listExtents() fe_physical=5710209351680 fe_device=0 fe_length=2097152
listExtents() fe_physical=30621271326720 fe_device=0 fe_length=16777216
listExtents() fe_physical=31761568366592 fe_device=0 fe_length=16777216
listExtents() fe_physical=24757567225856 fe_device=0 fe_length=16777216
listExtents() fe_physical=14196460748800 fe_device=0 fe_length=16777216
listExtents() nMapped=8 byteCount=104857600
pfe27.jbauer2 301> lfs getstripe /nobackupp17/jbauer2/dd.dat
/nobackupp17/jbauer2/dd.dat
lmm_stripe_count: 6
lmm_stripe_size: 2097152
lmm_pattern: raid0
lmm_layout_gen: 0
lmm_stripe_offset: 126
lmm_pool: ssd-pool
obdidx objid objid group
126 13930025 0xd48e29 0
113 13115889 0xc821f1 0
120 14003176 0xd5abe8 0
109 12785483 0xc3174b 0
102 13811117 0xd2bdad 0
116 13377285 0xcc1f05 0
_______________________________________________
lustre-discuss mailing list
lustre-discuss at lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Whamcloud