[lustre-discuss] fiemap
John Bauer
bauerj at iodoctors.com
Thu Aug 18 14:59:46 PDT 2022
Andreas,
Well, that works. I got the devices I would expect. The ioctl() calls
look identical, and the lengths are identical (allowing for the factor
of 1024 in block size), but my devices are 0. Thanks for getting me
going with the correct filefrag. I'll report back when I sort out my problem.
John
pfe27.jbauer2 390> strace -o filefrag.strace ./misc/filefrag -v /nobackupp17/jbauer2/dd.dat
Filesystem type is: bd00bd0
File size of /nobackupp17/jbauer2/dd.dat is 104857600 (102400 blocks of 1024 bytes)
ext: device_logical: physical_offset: length: dev: flags:
0: 0.. 13311: 33431977984.. 33431991295: 13312: 0008: net
1: 0.. 13311: 164044554240..164044567551: 13312: 0009: net
2: 0.. 13311: 539103838208..539103851519: 13312: 000a: net
3: 0.. 13311: 48145154048.. 48145167359: 13312: 000b: net
4: 0.. 12287: 168782233600..168782245887: 12288: 000c: net
5: 0.. 12287: 168137900032..168137912319: 12288: 000d: net
6: 0.. 12287: 18729435136.. 18729447423: 12288: 000e: net
7: 0.. 12287: 163376496640..163376508927: 12288: 000f: last,net
/nobackupp17/jbauer2/dd.dat: 8 extents found
Strace lines of interest for filefrag:
ioctl(3,FS_IOC_FIEMAP,{fm_start=0, fm_length=18446744073709551615, fm_flags=0x40000000 /* FIEMAP_FLAG_??? */, fm_extent_count=292} =>{fm_flags=0x40000000 /* FIEMAP_FLAG_??? */, fm_mapped_extents=8, ...}) = 0
write(1," ext: device_logical: "...,75) = 75
write(1," 0: 0.. 13311: 33431"...,72) = 72
write(1," 1: 0.. 13311: 164044"...,72) = 72
Strace lines of interest for llfie:
ioctl(3,FS_IOC_FIEMAP,{fm_start=0, fm_length=18446744073709551615, fm_flags=0x40000000 /* FIEMAP_FLAG_??? */, fm_extent_count=1024} =>{fm_flags=0x40000000 /* FIEMAP_FLAG_??? */, fm_mapped_extents=8, ...}) = 0
write(2,"listExtents() fe_physical=342343"...,72) = 72
write(2,"listExtents() fe_physical=167981"...,73) = 73
write(2,"listExtents() fe_physical=552042"...,73) = 73
write(2,"listExtents() fe_physical=493006"...,72) = 72
write(2,"listExtents() fe_physical=172833"...,73) = 73
On 8/18/22 16:11, Andreas Dilger wrote:
> On Aug 18, 2022, at 14:28, John Bauer <bauerj at iodoctors.com> wrote:
>>
>> Andreas,
>>
>> Thanks for the reply. I don't think I'm accessing the Lustre
>> filefrag (see below). Where would I normally find that installed?
>> I downloaded the lustre-release git repository and can't find
>> filefrag stuff to build my own. Is that somewhere else?
>>
> filefrag is part of the e2fsprogs package ("rpm -qf $(which
> filefrag)"), so you need to download and install the Lustre-patched
> e2fsprogs from https://downloads.whamcloud.com/public/e2fsprogs/latest/
>
>> More info:
>>
>> pfe27.jbauer2 334> cat /sys/fs/lustre/version
>> 2.12.8_ddn12
>
> You should really use "lctl get_param version", since the Lustre /proc
> and /sys files move around on occasion.
>
> The PFL/FLR change for FIEMAP is not included in this version, but it
> _should_ be irrelevant because the file you are testing is using a
> plain layout, not PFL or FLR.
>> pfe27.jbauer2 335> filefrag -v /nobackupp17/jbauer2/dd.dat
>> Filesystem type is: bd00bd0
>> File size of /nobackupp17/jbauer2/dd.dat is 104857600 (25600 blocks of 4096 bytes)
>> /nobackupp17/jbauer2/dd.dat: FIBMAP unsupported
>>
>> pfe27.jbauer2 336> which filefrag
>> /usr/sbin/filefrag
>>
>>
>> John
>>
>> On 8/18/22 14:57, Andreas Dilger wrote:
>>> What version of Lustre are you using? Does "filefrag -v" from a
>>> newer Lustre e2fsprogs (1.45.6.wc3+) work properly?
>>>
>>> There was a small change to the Lustre FIEMAP handling in order to
>>> handle overstriped files and PFL/FLR files with many stripes and
>>> multiple components, since the FIEMAP "restart" mechanism was broken
>>> for files that had multiple objects on the same OST index. See
>>> LU-11484 for details. That change was included in the 2.14.0 release.
>>>
>>> In essence, the fe_device field now encodes the absolute file stripe
>>> number in the high 16 bits of fe_device, and the device number in
>>> the low 16 bits (as it did before). Since "filefrag -v" prints
>>> fe_device in hex and would show as "0x<stripe><device>" instead of
>>> "0x0000<device>", this was considered an acceptable tradeoff
>>> compared to other "less compatible" changes that would have been
>>> needed to implement PFL/FLR handling.
>>>
>>> That said, I would have expected this change to result in your tool
>>> reporting very large values for fe_device (e.g. OST index + N *
>>> 65536), so returning all-zero values is somewhat unexpected.
>>>
>>> Cheers, Andreas
>>>
>>>> On Aug 18, 2022, at 06:27, John Bauer <bauerj at iodoctors.com> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> I am trying to get my llfie program (which uses fiemap) going
>>>> again, but now the struct fiemap_extent structures I get back from
>>>> the ioctl call, all have fe_device=0. The output from lfs
>>>> getstripe indicates that the devices are not all 0. The sum of the
>>>> fe_length members adds up to the file size, so that is working.
>>>> The fe_physical members look reasonable also. Has something
>>>> changed? This used to work.
>>>>
>>>> Thanks, John
>>>>
>>>> pfe27.jbauer2 300> llfie /nobackupp17/jbauer2/dd.dat
>>>> LustreStripeInfo_get() lum->lmm_magic=0xbd30bd0
>>>> listExtents() fe_physical=30643484360704 fe_device=0 fe_length=16777216
>>>> listExtents() fe_physical=30646084829184 fe_device=0 fe_length=2097152
>>>> listExtents() fe_physical=5705226518528 fe_device=0 fe_length=16777216
>>>> listExtents() fe_physical=5710209351680 fe_device=0 fe_length=2097152
>>>> listExtents() fe_physical=30621271326720 fe_device=0 fe_length=16777216
>>>> listExtents() fe_physical=31761568366592 fe_device=0 fe_length=16777216
>>>> listExtents() fe_physical=24757567225856 fe_device=0 fe_length=16777216
>>>> listExtents() fe_physical=14196460748800 fe_device=0 fe_length=16777216
>>>> listExtents() nMapped=8 byteCount=104857600
>>>>
>>>>
>>>> pfe27.jbauer2 301> lfs getstripe /nobackupp17/jbauer2/dd.dat
>>>> /nobackupp17/jbauer2/dd.dat
>>>> lmm_stripe_count: 6
>>>> lmm_stripe_size: 2097152
>>>> lmm_pattern: raid0
>>>> lmm_layout_gen: 0
>>>> lmm_stripe_offset: 126
>>>> lmm_pool: ssd-pool
>>>> obdidx       objid        objid        group
>>>> 126 13930025 0xd48e29 0
>>>> 113 13115889 0xc821f1 0
>>>> 120 14003176 0xd5abe8 0
>>>> 109 12785483 0xc3174b 0
>>>> 102 13811117 0xd2bdad 0
>>>> 116 13377285 0xcc1f05 0
>>>>
>>>> _______________________________________________
>>>> lustre-discuss mailing list
>>>> lustre-discuss at lists.lustre.org
>>>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>>
>>> Cheers, Andreas
>>> --
>>> Andreas Dilger
>>> Lustre Principal Architect
>>> Whamcloud
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Whamcloud