[lustre-discuss] Max Single OSS throughput is not crossing 7 GB/s Reads

Karan Singh karanveersingh5623 at gmail.com
Tue Jun 28 22:24:08 PDT 2022


Hahaha... it's just a typo. I was testing both in order to understand the
I/O performance.

mpirun -hostfile nodefile --map-by node -np 16 /usr/local/bin/ior -a POSIX
--posix.odirect -v -i 50 -g -F -e -k -o /mnt/lustre/test.ior
-r -t ${xfersize} -b ${blksize}
-O summaryFile=./${TESTDIR}/iorresult_SeqWrite_p${numthreads}_bs${blksize}_tf${xfersize}.json,summaryFormat=JSON
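
For reference, the sweep over the ${blksize}/${xfersize} values listed
below can be driven by a small wrapper script. This is only a minimal
sketch of how I run it (the nodefile, mount point, and results directory
name are assumptions from my setup); it prints each command as a dry run
rather than launching the benchmark:

```shell
#!/bin/sh
# Dry-run sketch of the IOR parameter sweep: one invocation per block
# size, with xfersize matched to blksize as in the test above.
TESTDIR=results        # assumed output directory name
numthreads=16
mkdir -p "${TESTDIR}"

for blksize in 128M 64M 32M 16M 8M 4M 2M 1M; do
    xfersize=${blksize}
    # Drop the leading 'echo' to actually launch the benchmark.
    echo mpirun -hostfile nodefile --map-by node -np ${numthreads} \
        /usr/local/bin/ior -a POSIX --posix.odirect -v -i 50 -g -F -e -k \
        -o /mnt/lustre/test.ior -r -t ${xfersize} -b ${blksize} \
        -O summaryFile=./${TESTDIR}/iorresult_SeqWrite_p${numthreads}_bs${blksize}_tf${xfersize}.json,summaryFormat=JSON
done
```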

On Wed, Jun 29, 2022 at 1:22 PM Andreas Dilger <adilger at whamcloud.com>
wrote:

> On Jun 28, 2022, at 21:51, Karan Singh via lustre-discuss <
> lustre-discuss at lists.lustre.org> wrote:
>
>
> Hi team
>
> Below are the details :
> Using 40 lustre docker clients running on 4 x Dell R750 with each lustre
> docker client running the below mentioned command
>
> ${xfersize} = ${blksize}
> ${blksize} = 128M, 64M, 32M, 16M, 8M, 4M, 2M, 1M
>
> mpirun -hostfile nodefile --map-by node -np 16 /usr/local/bin/ior -a POSIX
> --posix.odirect -v -i 50 -g -F -e -k -o /mnt/beegfs/test.ior
>
>
> I think the problem is pretty clear right here - you are using BeeGFS for
> your testing, not Lustre...  :-)
>
> -r -t ${xfersize} -b ${blksize}
> -O summaryFile=./${TESTDIR}/iorresult_SeqWrite_p${numthreads}_bs${blksize}_tf${xfersize}.json,summaryFormat=JSON
>
>
> <image.png>
>
> Please let me know what the bottleneck is in the above setup.
> Also, if you need more info, I will provide it right away.
>
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Whamcloud
>
