Original article. If you repost it, please credit the source: reproduced from the Systems Technology Non-Amateur Research blog.
Permanent link to this article: qperf: measuring network bandwidth and latency
When we build network servers, we usually care about the bandwidth and latency of the network. Because many of our protocols are request-response protocols, latency bounds the maximum QPS and bandwidth bounds the maximum throughput. We usually know what kind of NIC we have, what model the switch is, and the physical distance between the hosts, so we know what the bandwidth and latency should be in theory. In practice, however, the real numbers are pulled away from that estimate by many variables: the NIC driver, the number of switch hops, the packet loss rate, the protocol stack configuration, and ultimately even the speed of light. So we need a tool to actually measure them.
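To make "latency caps QPS" concrete, here is a hedged back-of-envelope calculation using the 32 us round trip measured later in this article: a client that sends one request and waits for the reply before sending the next can never exceed 1/RTT requests per second on a single connection.

$ echo "1000000/32" | bc    # microseconds per second / round-trip time in microseconds
31250

Getting past roughly 31,000 QPS per client then requires pipelining or more concurrent connections, which is exactly why measuring the real latency matters.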
There are many network measurement tools out there, and netperf is a fine one, but I recommend qperf: it ships with RHEL 6, so it is very convenient to use. A simple

yum install qperf

and you are done.
Let's look at the description from man qperf:
qperf measures bandwidth and latency between two nodes. It can work over TCP/IP as well as the RDMA transports. On one of the nodes, qperf is typically run with no arguments, designating it the server node. One may then run qperf on a client node to obtain measurements such as bandwidth, latency and CPU utilization.
In its most basic form, qperf is run on one node in server mode by invoking it with no arguments. On the other node it is run with two arguments: the name of the server node followed by the name of the test. A list of tests can be found in the section TESTS. A variety of options may also be specified.
It's also fairly simple to use:
Run qperf on one of the machines without any arguments; that machine takes the server role:
$ qperf    # server side; the host runs kernel 2.6.32-131.21.1.tb477.el6.x86_64
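Two server-side details worth knowing, both from my recollection of the qperf man page, so please verify with man qperf on your own box: the control connection listens on TCP port 19765 by default (open it in the firewall if there is one between the hosts), and when you are finished you do not have to log back in to the server, because a client can stop it with the special quit test:

$ qperf 10.232.64.yyy quit    # ask the qperf server on 10.232.64.yyy to exit (test name from memory; see man qperf)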
On another machine, run qperf to measure TCP bandwidth and latency, and to report the configuration of both machines:
$ qperf 10.232.64.yyy tcp_bw tcp_lat conf
    loc_cpu = 16 Cores: Intel Xeon L5630 @ 2.13GHz
    loc_os  = Linux 2.6.32-131.21.1.tb477.el6.x86_64
    rem_cpu = 16 Cores: Intel Xeon L5630 @ 2.13GHz
    rem_os  = Linux 2.6.32-131.21.1.tb477.el6.x86_64
Isn't that convenient? For us the bandwidth is typically 118 MB/s and the latency 32 us, which is what you would expect in a standard gigabit Ethernet environment.
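As a quick sanity check on the 118 MB/s figure: the raw line rate of gigabit Ethernet is 1 Gbit/s, i.e. 125 MB/s, and Ethernet, IP and TCP headers eat a few percent of that, so 118 MB/s of TCP payload is about as good as it gets.

$ echo "scale=1; 1000000000/8/1000000" | bc    # 1 Gbit/s expressed in MB/s
125.0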
Of course qperf has plenty of advanced options: you can set the socket buffer size, bind CPU affinity, and so on. A particularly useful feature is finding critical points by sweeping the value of an important parameter:
-oo, --loop Var:Init:Last:Incr
    Run a test multiple times sequencing through a series of values. Var is the loop variable;
    Init is the initial value; Last is the value it must not exceed and Incr is the increment.
    It is useful to set the --verbose_used (-vu) option in conjunction with this option.
For example, we can watch how bandwidth and latency change as we vary the message size (msg_size), say from 1 byte to 64K, doubling it each time:
$ qperf -oo msg_size:1:64K:*2 10.232.64.yyy tcp_bw tcp_lat
From the output we can see that the bandwidth ramps up once the message size reaches 64 bytes, and the latency changes markedly once the message reaches 1K. These critical points are very helpful when estimating and planning the performance of our server programs.
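As the man page excerpt above suggests, it also helps to add -vu (--verbose_used) to the sweep so that qperf prints the parameter values it actually used at each step; the same sweep would then look something like this:

$ qperf -vu -oo msg_size:1:64K:*2 10.232.64.yyy tcp_bw tcp_lat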
Besides TCP, qperf can also measure the bandwidth and latency of RDMA, UDP, SCTP and other mainstream transports. It is a fairly new tool, and I recommend it to everyone.
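For example, getting the same two numbers for UDP is the same kind of one-liner, and SCTP works the same way assuming your qperf build includes the SCTP tests:

$ qperf 10.232.64.yyy udp_bw udp_lat
$ qperf 10.232.64.yyy sctp_bw sctp_lat    # assumes SCTP test support in this qperf build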
Have a good time!