[Lustre-discuss] stripe don't work

Alfonso Pardo alfonso.pardo at ciemat.es
Mon May 28 07:22:23 PDT 2012


GOAL!

You are the best!

# lfs setstripe -c -1 -s 1M /mnt/data/
# dd if=/dev/zero of=test bs=1M count=4
# lfs getstripe test
test
lmm_stripe_count:   18
lmm_stripe_size:    1048576
lmm_stripe_offset:  6
     obdidx         objid        objid         group
          6                 2              0x2                 0
          8                 2              0x2                 0
          9                 2              0x2                 0
         12                 2              0x2                 0
         14                 2              0x2                 0
         16                 2              0x2                 0
          1                 2              0x2                 0
          3                 2              0x2                 0
          5                 2              0x2                 0
          7                 2              0x2                 0
         10                 2              0x2                 0
         11                 2              0x2                 0
         13                 2              0x2                 0
         15                 3              0x3                 0
         17                 3              0x3                 0
          0                 3              0x3                 0
          2                 3              0x3                 0
           4                 2              0x2                 0
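
For the archives, the key difference is setting a stripe count on the
directory's default layout, not just a stripe size. A minimal sketch of the
whole sequence (the test filename and the extra getstripe checks are only
illustrative):

# stripe new files under /mnt/data across all available OSTs, 1 MiB stripe size
lfs setstripe -c -1 -s 1M /mnt/data

# print the directory's default layout to confirm the change
lfs getstripe -d /mnt/data

# write a test file and print only its stripe count (should match the number
# of OSTs, not 1)
dd if=/dev/zero of=/mnt/data/stripetest bs=1M count=64
lfs getstripe -c /mnt/data/stripetest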



On 28/05/12 16:09, Colin Faber wrote:
>
> Hi,
>
> The output indicates a stripe count of 1. Use lfs setstripe to set the
> stripe count to something other than 1, e.g. -c -1 to stripe across all OSTs.
>
> On May 28, 2012 7:34 AM, "Alfonso Pardo" <alfonso.pardo at ciemat.es> wrote:
>
>     Hello!
>
>     I just migrated my Lustre filesystem to version 2.2. I am trying to
>     activate striping, but when I copy a file into the filesystem, it is
>     allocated on only one OST.
>
>     This is my procedure:
>
>     From the client:
>     #> lfs setstripe -s 1M /mnt/data/
>     #> lfs df
>
>     UUID                   1K-blocks        Used   Available Use% Mounted on
>     cetafs-MDT0000_UUID   1462920404      501360  1364873324   0% /mnt/data[MDT:0]
>     cetafs-OST0000_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:0]
>     cetafs-OST0001_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:1]
>     cetafs-OST0002_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:2]
>     cetafs-OST0003_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:3]
>     cetafs-OST0004_UUID   9760101272      450304  9267055468   0% /mnt/data[OST:4]
>     cetafs-OST0005_UUID   9760101272      450304  9271366744   0% /mnt/data[OST:5]
>     cetafs-OST0006_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:6]
>     cetafs-OST0007_UUID   9760101272      450304  9271366744   0% /mnt/data[OST:7]
>     cetafs-OST0008_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:8]
>     cetafs-OST0009_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:9]
>     cetafs-OST000a_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:10]
>     cetafs-OST000b_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:11]
>     cetafs-OST000c_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:12]
>     cetafs-OST000d_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:13]
>     cetafs-OST000e_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:14]
>     cetafs-OST000f_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:15]
>     cetafs-OST0010_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:16]
>     cetafs-OST0011_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:17]
>
>     filesystem summary:  175681822896     8105472 166884732464   0% /mnt/data
>
>
>     Then I copy a 4.5 GB file, but...:
>
>     #> lfs getstripe CentOS-6.2-x86_64-bin-DVD1.iso
>     CentOS-6.2-x86_64-bin-DVD1.iso
>     lmm_stripe_count:   1
>     lmm_stripe_size:    262144
>     lmm_stripe_offset:  4
>         obdidx         objid        objid         group
>              4                 2              0x2                 0
>
>     #> lfs df
>     UUID                   1K-blocks        Used   Available Use% Mounted on
>     cetafs-MDT0000_UUID   1462920404      501360  1364873324   0% /mnt/data[MDT:0]
>     cetafs-OST0000_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:0]
>     cetafs-OST0001_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:1]
>     cetafs-OST0002_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:2]
>     cetafs-OST0003_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:3]
>     cetafs-OST0004_UUID   9760101272     4769772  9267055468   0% /mnt/data[OST:4]
>     cetafs-OST0005_UUID   9760101272      450304  9271366744   0% /mnt/data[OST:5]
>     cetafs-OST0006_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:6]
>     cetafs-OST0007_UUID   9760101272      450304  9271366744   0% /mnt/data[OST:7]
>     cetafs-OST0008_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:8]
>     cetafs-OST0009_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:9]
>     cetafs-OST000a_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:10]
>     cetafs-OST000b_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:11]
>     cetafs-OST000c_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:12]
>     cetafs-OST000d_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:13]
>     cetafs-OST000e_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:14]
>     cetafs-OST000f_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:15]
>     cetafs-OST0010_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:16]
>     cetafs-OST0011_UUID   9760101272      450304  9271374936   0% /mnt/data[OST:17]
>
>     filesystem summary:  175681822896    12424940 166880412996   0% /mnt/data
>
>     The file is allocated only on the OST with index 4.
>
>
>     Can someone help me?
>
>
>     Thanks!!!!!!!!
>     -- 
>
>     Alfonso Pardo Díaz
>
>     IT Manager
>     c/ Sola nº 1; 10200 Trujillo, ESPAÑA
>     Tel: +34 927 65 93 17  Fax: +34 927 32 32 37
>
>     CETA-Ciemat <http://www.ceta-ciemat.es/>
>
>
>
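
For completeness: "lfs setstripe -s 1M" on its own only sets the stripe size;
without -c the stripe count stays at the default (1 here), which is why the
whole ISO still went to a single object on OST0004 (its Used column grows from
450304 to 4769772 1K-blocks, roughly the size of the ISO, while the other OSTs
stay untouched). A new directory default also only applies to files created
afterwards; existing files keep their old layout and have to be rewritten to
pick it up. A minimal sketch, assuming the default has already been changed
with -c -1 as above (the temporary filename is arbitrary):

# re-create a single-striped file so it inherits the new directory default
cp CentOS-6.2-x86_64-bin-DVD1.iso CentOS-6.2-x86_64-bin-DVD1.iso.restripe
mv CentOS-6.2-x86_64-bin-DVD1.iso.restripe CentOS-6.2-x86_64-bin-DVD1.iso
lfs getstripe -c CentOS-6.2-x86_64-bin-DVD1.iso   # should now report 18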


-- 

Alfonso Pardo Díaz

IT Manager
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17  Fax: +34 927 32 32 37

CETA-Ciemat <http://www.ceta-ciemat.es/>
