The impact of bandwidth in performance testing


When you run a performance test and the server-side metrics stay unexpectedly low under load, consider whether the bandwidth of your test environment is the limiting factor. The following is an account of the problems we ran into in one such test.

Performance tuning points:

1. Making dynamic pages static

2. Network I/O versus disk I/O

3.

Over the past couple of days we have been doing performance testing as part of server tuning. In a stress test of one of the detail pages the result was 110 TPS, which we were quite dissatisfied with, so we tested a number of other modules and the results were all very similar. During the stress tests, however, the server's resource consumption stayed very low, so the server was clearly nowhere near its limit and the application itself was probably not the problem; if the program were at fault, the server's resources would not have had so much idle capacity.

So where was the problem? Our web tier is nginx + lighttpd, so we tested each layer separately; the results were still the same, and CPU usage on the 4-core server was only around 20%. We then stress-tested a single static page and the results were again similar, with no significant improvement. Dynamic and static pages giving the same results is a fairly telling conclusion: with the application ruled out, the problem should be somewhere in I/O. Disk monitoring showed that disk I/O was very low, but we made an unexpected discovery: across multiple test runs, network I/O traffic hovered around 12 MB/s.
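A quick back-of-the-envelope check shows why those two numbers together point at the link rather than the server: dividing the observed network traffic by the observed TPS gives the average bytes on the wire per transaction. A minimal sketch, using only the figures quoted above:

```python
# Rough check: is the load test bandwidth-bound?
observed_tps = 110          # transactions per second reported by the test
observed_net_mb_s = 12.0    # NIC traffic reported by monitoring, in MB/s

avg_kb_per_txn = observed_net_mb_s * 1024 / observed_tps
print(f"average data on the wire per transaction: ~{avg_kb_per_txn:.0f} KB")

# If this roughly matches the real response size (HTML + headers),
# the link is saturated and the server itself is not the bottleneck.
```

At 110 TPS and 12 MB/s this works out to roughly 110 KB per transaction, a plausible size for a detail page, so the arithmetic alone already hints at a saturated link.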

Seeing this, everything suddenly made sense: here was the problem. We had been focused on tuning the server itself and had forgotten about network bandwidth. Although the test server has a 100/1000 Mbps NIC, it was deployed on a 100 Mbps LAN, and the actual payload throughput of a 100 Mbps link is only about 12 MB/s. So the bottleneck we finally found was network bandwidth rather than the server's own performance. We immediately moved all of the test servers and clients onto the same gigabit (1000 Mbps) network to minimize the bandwidth limitation.
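For reference, the ceiling arithmetic can be written down directly: the link's nominal speed in Mbps divided by eight gives bytes per second, and dividing that by the response size gives the best TPS any amount of server tuning could produce. A minimal sketch, assuming a ~110 KB page (per the estimate above) and a rough 95% payload efficiency; both figures are illustrative, not measurements:

```python
def tps_ceiling(link_mbps: float, page_kb: float, efficiency: float = 0.95) -> float:
    """Upper bound on TPS imposed purely by the network link."""
    usable_bytes_per_s = link_mbps * 1_000_000 / 8 * efficiency
    return usable_bytes_per_s / (page_kb * 1024)

# Assumed ~110 KB per response, per the back-calculation above.
for mbps in (100, 1000):
    print(f"{mbps:>4} Mbps link -> at most ~{tps_ceiling(mbps, 110):.0f} TPS")
```

On a 100 Mbps LAN this caps out at roughly 105 TPS, which matches the ~110 TPS we kept hitting; on a gigabit link the same page size allows roughly ten times that.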

With the new environment set up, we tested again and the results were as expected. In the static page tests the fastest module reached over 600 TPS, with NIC traffic at around 50 MB/s. Because the page sizes of the different modules are not the same, the TPS results varied, but NIC traffic stayed around 50 MB/s either way. The theoretical throughput of a 1000 Mbps network is roughly 120 MB/s, so 50 MB/s falls well short of that; we would not expect to reach the theoretical maximum, but the gap should not be this large. We therefore tried copying large files between the two servers to measure the actual transfer speed, and the result matched the web server tests: also about 50 MB/s. This again proved that I/O was the bottleneck, though we could not tell whether it was disk I/O or network I/O. With time tight we did not dig further into this question, because this was only a static page test; in real operation the true bottleneck should be in the application, so we shifted our focus back to dynamic page requests.
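To separate raw network capacity from the web stack and the disks, the large-file copy can also be replaced with a small TCP throughput test between the two machines. The sketch below is a generic iperf-style check, not the tool or procedure we actually used; the port and transfer size are arbitrary placeholders:

```python
import socket, sys, time

PORT = 5001                 # arbitrary test port
CHUNK = 1024 * 1024         # 1 MiB send/receive buffer
TOTAL = 500 * CHUNK         # transfer 500 MiB per run

def serve():
    """Run on one host: accept a connection and discard incoming bytes."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def send(host):
    """Run on the other host: send TOTAL bytes and report throughput."""
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        start = time.time()
        sent = 0
        while sent < TOTAL:
            sock.sendall(payload)
            sent += CHUNK
    elapsed = time.time() - start
    print(f"{sent / elapsed / 1024 / 1024:.1f} MB/s over {elapsed:.1f} s")

if __name__ == "__main__":
    serve() if sys.argv[1] == "server" else send(sys.argv[1])
```

Run it with the argument `server` on one machine and the other machine's IP on the client. If a raw TCP stream also tops out near 50 MB/s, the network path itself is the limit; if it gets close to the ~120 MB/s a gigabit link allows, disk I/O becomes the more likely suspect.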

We then stress-tested the dynamic pages again. This time the average result per module was around 200 TPS, with network traffic of only about 30 MB/s, while server resource usage stayed at 80-90%, essentially fully loaded. 200 TPS for purely dynamic pages is not high, and the program itself still has plenty of room for optimization, but the site officially launches right after the May Day holiday, so it is too late to optimize now. Even so, I am actually fairly satisfied with 200 TPS for pure dynamic requests, because in the realistic scenario where a traffic peak is most likely, the vast majority of users will be hitting a handful of the same pages, and requests for those pages have already been made static: apart from the first request, every subsequent request goes straight to the static page. So the load the system can actually withstand is much higher than this figure suggests.
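The "static after the first request" behavior mentioned above can be sketched roughly as follows: render the page once, write the HTML into a directory the web server serves directly, and let every later request hit that file instead of the application. The render function, paths and freshness window here are hypothetical placeholders used to illustrate the pattern, not our actual implementation:

```python
import os, time

STATIC_ROOT = "/var/www/static_cache"   # hypothetical docroot served directly by nginx
MAX_AGE = 300                           # regenerate copies older than 5 minutes (assumed)

def render_detail_page(item_id: int) -> str:
    """Placeholder for the real, expensive dynamic rendering."""
    return f"<html><body>detail page for item {item_id}</body></html>"

def get_page(item_id: int) -> str:
    """Serve the cached static copy if it is fresh, otherwise regenerate it."""
    path = os.path.join(STATIC_ROOT, f"detail_{item_id}.html")
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < MAX_AGE:
        with open(path, encoding="utf-8") as f:
            return f.read()
    html = render_detail_page(item_id)          # only cache misses pay this cost
    os.makedirs(STATIC_ROOT, exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        f.write(html)
    return html
```

With a layout like this, only the first request for a page pays the dynamic-rendering cost that the 200 TPS figure measures; every later request is a static hit, which the gigabit tests above showed the same stack serving at several times that rate.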
