<html>
<head>
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Patrick,</p>
<p><br>
</p>
<p>By socket-level, I mean a physical CPU socket. It seems
that increasing the number of cores used by an mpirun or ior
run doesn't increase total throughput unless doing so adds
another physical socket. <br>
</p>
<p>I'm pretty sure the network and OSTs can handle the traffic: I
have tested the network to 40Gb/s with iperf, and the OSTs are all
NVMe.</p>
<p>I have tested with 1, 2, and 3 clients using an MPI-IO copy
program. It reads from one file on Lustre and writes it to
another, with each worker reading its own portion of the file.</p>
<p><br>
</p>
<p>Hmm. I shall try doing multiple copies at the same time to see
what happens. That, I hadn't tested.</p>
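A minimal sketch of that concurrent-copies test (the mount point, file names, and sizes are all placeholders; adjust for your setup):

```shell
# Run several independent copies at once and compare the summed
# throughput against the single-copy number (~1.5 GB/s here).
for i in 1 2 3 4; do
  dd if=/mnt/lustre/src.$i of=/mnt/lustre/dst.$i \
     bs=1M count=10240 oflag=direct &
done
wait
```

If the aggregate scales with the number of concurrent copies, the per-copy limit is on the client side rather than the OSTs or the network.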
<p>We are using Lustre 2.10.51-1 under CentOS 7, kernel
3.10.0-514.26.2.<br>
</p>
<br>
Brian <br>
<br>
<div class="moz-cite-prefix">On 8/30/2017 9:32 AM, Patrick Farrell
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:BN6PR1101MB21324A2B07245AD371514549CB9C0@BN6PR1101MB2132.namprd11.prod.outlook.com">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text -->
<style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
<meta content="text/html; charset=UTF-8">
<style type="text/css" style="">
<!--
p
{margin-top:0;
margin-bottom:0}
-->
</style>
<div dir="ltr">
<div id="x_divtagdefaultwrapper" dir="ltr"
style="font-size:12pt; color:#000000;
font-family:Calibri,Helvetica,sans-serif">
<p>Brian,</p>
<p><br>
</p>
<p>I'm not sure what you mean by "socket level".<br>
</p>
<br>
<p>A starter question:<br>
How fast are your OSTs? Are you sure the limit isn't the
OST? (An easy way to test: write multiple files on that OST
from multiple clients and see how that performs.)</p>
<div>(lfs setstripe -i [index] to set the OST for a singly
striped file)</div>
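A sketch of that per-OST test (OST index 0, the mount point, and the sizes are assumptions; run the dd's from one or more clients):

```shell
# Pin several single-stripe files to the same OST, then write them
# from separate processes; the aggregate shows that OST's real limit.
for i in 1 2 3 4; do
  lfs setstripe -i 0 -c 1 /mnt/lustre/ost0-test.$i
  dd if=/dev/zero of=/mnt/lustre/ost0-test.$i \
     bs=1M count=4096 oflag=direct &
done
wait
```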
<br>
<p>In general, you can get ~1.3-1.8 GB/s from one process to
one file with a recent-ish Xeon, if your OSTs and network
can handle it. There are a number of other factors that can
get involved in limiting your bandwidth with multiple
threads.</p>
<p><br>
</p>
<p>It sounds like you're always (in the numbers you report)
using one client at a time. Is that correct?<br>
</p>
<p><br>
</p>
<p>I suspect that you're limited in bandwidth to a specific
OST, either by the OST or by the client settings. What's
your bandwidth limit from one client to multiple files on
the same OST? Is it that same 1.5 GB/s?</p>
<p><br>
</p>
<p>If so (or even if it's close), you may need to increase
your clients' RPC size (max_pages_per_rpc in
/proc/fs/lustre/osc/[OST]/) or max_rpcs_in_flight (same
place). Note that if you increase those, you also need to
increase max_dirty_mb (again, same place). The manual
describes the relationship. </p>
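For reference, those tunables can be read and set with lctl; the values below are illustrative only, so check the manual for the relationship between them before changing anything:

```shell
# Current per-OSC values
lctl get_param osc.*.max_pages_per_rpc osc.*.max_rpcs_in_flight \
               osc.*.max_dirty_mb

# Example: 4 MiB RPCs (1024 x 4 KiB pages) with 16 in flight;
# max_dirty_mb must grow to cover rpc_size * rpcs_in_flight.
lctl set_param osc.*.max_pages_per_rpc=1024
lctl set_param osc.*.max_rpcs_in_flight=16
lctl set_param osc.*.max_dirty_mb=512
```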
<p><br>
</p>
<p>Also - What version of Lustre are you running? Client
& server.<br>
</p>
<p><br>
</p>
<p>- Patrick<br>
</p>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font style="font-size:11pt"
face="Calibri, sans-serif" color="#000000"><b>From:</b>
lustre-discuss
<a class="moz-txt-link-rfc2396E" href="mailto:lustre-discuss-bounces@lists.lustre.org"><lustre-discuss-bounces@lists.lustre.org></a> on behalf of
Brian Andrus <a class="moz-txt-link-rfc2396E" href="mailto:toomuchit@gmail.com"><toomuchit@gmail.com></a><br>
<b>Sent:</b> Wednesday, August 30, 2017 11:16:08 AM<br>
<b>To:</b> <a class="moz-txt-link-abbreviated" href="mailto:lustre-discuss@lists.lustre.org">lustre-discuss@lists.lustre.org</a><br>
<b>Subject:</b> [lustre-discuss] Bandwidth bottleneck at
socket?</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:10pt;">
<div class="PlainText">All,<br>
<br>
I've been running various performance tests on a small
Lustre <br>
filesystem, and there seems to be a consistent bottleneck of
~700MB/s per <br>
socket involved.<br>
<br>
We have 6 servers with 2 Intel E5-2695 chips in each.<br>
<br>
3 servers are clients, 1 is MGS and 2 are OSSes with 1 OST
each. <br>
Everything is connected with 40Gb Ethernet.<br>
<br>
When I write to a single stripe, the best throughput I see
is about <br>
1.5GB/s. That doubles if I write to a file that has 2
stripes.<br>
<br>
If I do a parallel copy (using mpiio) I can get 1.5GB/s from
a single <br>
machine, whether I use 28 cores or 2 cores. If I only use 1,
it goes <br>
down to ~700MB/s<br>
<br>
Is there a bandwidth bottleneck that can occur at the socket
level of a <br>
system? It really seems that way.<br>
<br>
<br>
Brian Andrus<br>
<br>
_______________________________________________<br>
lustre-discuss mailing list<br>
<a class="moz-txt-link-abbreviated" href="mailto:lustre-discuss@lists.lustre.org">lustre-discuss@lists.lustre.org</a><br>
<a
href="http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org"
moz-do-not-send="true">http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org</a><br>
</div>
</span></font>
</blockquote>
<br>
</body>
</html>