[lustre-discuss] OST partition sizes

Christopher J. Morrone morrone2 at llnl.gov
Wed Apr 29 11:44:57 PDT 2015


For me, at least, that did not clear up what you meant by "record 
size".  IOR would not have anything that it sizes at 1MB with the 
options you gave as an example.

The only 1MB I can think of in the entire system is that the client may 
aggregate 128 of the sequential 8KB IOR writes into a single 1MB RPC. 
That aggregated 1MB could then be fed to the OST by the OSS in one write 
operation.

But none of that is explicitly known by IOR, so it would not make sense 
to me to call that a "record size...for IOR".

Chris

On 04/29/2015 09:38 AM, Alexander I Kulyavtsev wrote:
> ior/bin/IOR.mpiio.mvapich2-2.0b -h
>
> -t N  transferSize -- size of transfer in bytes (e.g.: 8, 4k, 2m, 1g)
>
> IOR reports it in the log:
>
> Command line used:
> /home/aik/lustre/benchmark/git/ior/bin/IOR.mpiio.mvapich2-2.0b -v -a
> MPIIO -i5 -g -e -w -r -b 16g -C -t 8k -o
> /mnt/lfs/admin/iotest/ior/stripe_2/ior-testfile.ssf
> ...
> Summary:
>
>          api                = MPIIO (version=3, subversion=0)
>          test filename      =
> /mnt/lfs/admin/iotest/ior/stripe_2/ior-testfile.ssf
>          access             = single-shared-file, independent
>          pattern            = segmented (1 segment)
>          ordering in a file = sequential offsets
>          ordering inter file=constant task offsets = 1
>          clients            = 32 (8 per node)
>          repetitions        = 5
>          xfersize           = 8192 bytes
>          blocksize          = 16 GiB
>          aggregate filesize = 512 GiB
>
> Here the xfersize is 8k and each of the 32 clients writes 16GB, so the
> aggregate file size is 512GB.
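>
> A minimal sketch of the arithmetic behind those numbers (the variable
> names are only illustrative):
>
>     KiB, MiB, GiB = 1024, 1024**2, 1024**3
>
>     xfersize  = 8 * KiB    # -t 8k : size of each individual transfer
>     blocksize = 16 * GiB   # -b 16g: contiguous bytes written per task
>     ntasks    = 32         # 8 tasks on each of 4 nodes
>
>     transfers_per_task = blocksize // xfersize  # 2,097,152 transfers per task
>     aggregate = ntasks * blocksize              # 512 GiB shared file
>     print(transfers_per_task, aggregate // GiB)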
>
> I would expect the record size to be ~1MB for our workloads.
>
> Best regards, Alex.
>
> On Apr 29, 2015, at 11:07 AM, Scott Nolin <scott.nolin at ssec.wisc.edu> wrote:
>
>> Ok I looked up my notes.
>>
>> I'm not really sure what you mean by record size. I assumed that when I
>> do file per process, the block size = file size. And that's what I see
>> dropped on the filesystem.
>>
>> I did -F -b <size>
>>
>> With block sizes 1MB, 20MB, 100MB, 200MB, 500MB
>>
>> 2, 4, 8, 16 threads on 1 to 4 clients.
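>>
>> Spelled out, that test matrix looks roughly like this (a sketch only;
>> the variable names are illustrative):
>>
>>     MiB = 1024 ** 2
>>
>>     block_sizes = [1 * MiB, 20 * MiB, 100 * MiB, 200 * MiB, 500 * MiB]
>>     threads_per_client = [2, 4, 8, 16]
>>     clients = [1, 2, 3, 4]
>>
>>     for b in block_sizes:
>>         for nc in clients:
>>             for tpc in threads_per_client:
>>                 ntasks = nc * tpc
>>                 # With -F (file per process) and one segment, each task
>>                 # writes its own file of b bytes, so file size == block size.
>>                 aggregate = ntasks * b
>>                 print(f"block={b // MiB}MB tasks={ntasks} "
>>                       f"aggregate={aggregate // MiB}MB")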
>>
>> I assumed that 2 threads on 1 client look a lot like a client writing
>> or reading 2 files. I didn't bother looking at 1 thread.
>>
>> Later I just started doing 100MB tests since it's a very common file
>> size for us. Plus I didn't see a real big difference once the size gets
>> bigger than that.
>>
>> Scott
>>
>>
>> On 4/29/2015 10:24 AM, Alexander I Kulyavtsev wrote:
>>> What range of record sizes did you use for IOR? This is more important
>>> than the file size.
>>> 100MB is small; the overall data size (# of files x file size) should be
>>> twice the memory. I ran a series of tests with small record sizes on
>>> raidz2 10+2; I will re-run some tests after upgrading to 0.6.4.1.
>>>
>>> Single file performance differs substantially from file per process.
>>>
>>> Alex.
>>>
>>> On Apr 29, 2015, at 9:38 AM, Scott Nolin <scott.nolin at ssec.wisc.edu> wrote:
>>>
>>>> I used IOR, single file, 100MB files. That's the most important
>>>> workload for us. I tried several different file sizes, but 100MB
>>>> seemed a reasonable compromise for what I see the most. We rarely or
>>>> never do file striping.
>>>>
>>>> I remember I did see a difference between 10+2 and 8+2. Especially at
>>>> smaller numbers of clients and threads, the 8+2 performance numbers
>>>> were more consistent and made a smoother curve. With 10+2 and not a
>>>> lot of threads, the performance was more variable.
>>>


