How do I generate millions of HTTP requests per second?

Source: Internet
Author: User
Tags: benchmark, free, ssh, passwordless ssh

This article is the first in a series on building a high-performance Web cluster capable of handling 3 million requests per second. It records my experience with load-generation tools, and I hope it saves time for anyone who has to use them as I did.

Load generators are programs that create traffic for testing. They show you how a server performs under heavy load and let you identify potential problems. Understanding a server's weaknesses through load testing is a good way to verify its resiliency and to prepare for a rainy day.

Load-generating tools

There is one important thing to keep in mind when running these tests: how many sockets you can open on Linux. This limit is hard-coded in the kernel, the most typical being the ephemeral port limit. You can expand it (to some extent) in /etc/sysctl.conf, but basically a Linux machine can only have about 64,000 sockets open at once. So during a load test we have to make the most of every socket by issuing as many requests as possible over a single connection, and we need more than one machine to generate the load. Otherwise a single load generator will exhaust its available sockets and fail to produce enough load.
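
As a rough illustration of the tuning mentioned above, here is a minimal sketch of the kind of settings involved; the values are examples I chose, not figures from this article, so adjust them for your own systems. Add them to /etc/sysctl.conf and apply with sysctl -p:

# widen the ephemeral port range available for outgoing connections
net.ipv4.ip_local_port_range = 1024 65535
# raise the system-wide file-descriptor limit so more sockets can be open at once
fs.file-max = 120000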

I started with 'ab', Apache Bench. It is the simplest and most universal HTTP benchmarking tool I know of, and it ships with Apache, so it may already be on your system. Unfortunately, I could only get about 900 requests per second out of it. Although I have seen others reach 2,000 requests per second with it, I could tell right away that 'ab' was not suited to our benchmark.
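
For reference, a typical 'ab' run looks something like the sketch below; the URL and the counts are placeholders rather than the exact command I used:

# -k reuses connections (keep-alive), -n is the total number of requests, -c is the concurrency
ab -k -n 100000 -c 100 http://192.168.122.10/test.txt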

Httperf

Then I tried 'httperf'. This tool is more powerful, but it is still relatively simple and limited in functionality. Figuring out how many requests per second it will produce is not as simple as passing it a parameter. After several attempts, I got it producing more than a few hundred requests per second. For example:

This creates 100,000 sessions at a rate of 1,000 per second, where each session issues 5 requests spaced 2 seconds apart.

httperf --hog --server=192.168.122.10 --wsess=100000,5,2 --rate 1000 --timeout 5
Total: connections 117557 requests 219121 replies 116697 test-duration 111.423 s

Connection rate: 1055.0 conn/s (0.9 ms/conn, <=1022 concurrent connections)
Connection time [ms]: min 0.3 avg 865.9 max 7912.5 median 459.5 stddev 993.1
Connection time [ms]: connect 31.1
Connection length [replies/conn]: 1.000

Request rate: 1966.6 req/s (0.5 ms/req)
Request size [B]: 91.0

Reply rate [replies/s]: min 59.4 avg 1060.3 max 1639.7 stddev 475.2 (samples)
Reply time [ms]: response 56.3 transfer 0.0
Reply size [B]: header 267.0 content 18.0 footer 0.0 (total 285.0)
Reply status: 1xx=0 2xx=116697 3xx=0 4xx=0 5xx=0

CPU time [s]: user 9.68 system 101.72 (user 8.7% system 91.3% total 100.0%)
Net I/O: 467.5 KB/s (3.8*10^6 bps)

Finally, I used these settings to reach 6,622 connections per second:

httperf --hog --server 192.168.122.10 --num-conn 100000 --rate 20000 --timeout 5

(100,000 connections in total, created at a fixed rate of 20,000 connections per second)

It has some potential and offers more features than 'ab', but it is not the heavyweight tool I need for this project. What I need is a tool that supports distributed load testing across multiple nodes. So my next attempt was JMeter.

Apache JMeter

This is a full-featured web application test suite that can simulate all the behavior of real users. You can use JMeter's proxy to visit your website, click around, log in, and mimic everything a user might do, and JMeter records those actions as a test case. JMeter then replays those actions repeatedly to simulate as many users as you want. Although JMeter is much more complex to configure than 'ab' or 'httperf', it is a very interesting tool!
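
Once a test plan has been recorded, JMeter can also replay it without the GUI. A minimal sketch, where testplan.jmx and results.jtl are placeholder file names:

# run a saved test plan in non-GUI mode and write the results to a file
jmeter -n -t testplan.jmx -l results.jtl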

In my tests, it generated 14,000 requests per second! That was definitely promising progress.

Using some plug-ins from a Google Code project, with their "Stepping Threads" and "HTTP Raw Request" features, I reached about 30,000 requests per second! But that was the limit, so I went looking for another tool. Here is one of my earlier JMeter configurations, in the hope that it helps someone; it is far from perfect, but it may sometimes meet your needs.

Tsung: a heavy-duty, distributed, multi-protocol testing tool

It can readily generate 40,000 requests per second, which is definitely the tool we want. Like JMeter, you can record behavior and replay it at test time, and it can test most protocols, such as SSL, HTTP, WebDAV, SOAP, PostgreSQL, MySQL, LDAP, and Jabber/XMPP. Unlike JMeter, there is no confusing GUI to deal with: all it needs is an XML configuration file and some SSH keys for the distributed nodes you choose. Its simplicity and efficiency appealed to me as much as its robustness and extensibility. I found it to be a very powerful tool, and with the right configuration it can generate millions of HTTP requests per second.

In addition, Tsung can produce HTML graphs and detailed reports of your tests. The results are easy to understand, and you can even show the graphs to your boss!

I will use this tool throughout the remainder of this series. You can follow the configuration instructions below, or skip ahead to the next page.

Installing Tsung on CentOS 6.2

First, install the EPEL repository (it is required for Erlang), so set it up before moving on. Then install the required packages on every node you will use to generate load. If you have not yet set up passwordless SSH keys between the nodes, create them now.
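
If you still need to create those keys, a minimal sketch follows; loadnode1 is a placeholder for each of your load-generating nodes:

# generate a key pair (accept the defaults, leave the passphrase empty)
ssh-keygen -t rsa
# copy the public key to each node that will generate load
ssh-copy-id root@loadnode1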

yum -y install erlang perl perl-RRD-Simple.noarch perl-Log-Log4perl-RRDs.noarch gnuplot perl-Template-Toolkit firefox

Download the latest Tsung from Github or Tsung's official website.

wget http://tsung.erlang-projects.org/dist/tsung-1.4.2.tar.gz

Unzip and compile.

tar zxfv tsung-1.4.2.tar.gz
cd tsung-1.4.2
./configure && make && make install

Copy the sample configuration into the ~/.tsung directory. This is where Tsung's configuration files and log files are stored.

cp /usr/share/doc/tsung/examples/http_simple.xml /root/.tsung/tsung.xml

You can edit this configuration file to suit your needs, or use mine. After a lot of trial and error, my current configuration can generate 5 million HTTP requests per second when using 7 distributed nodes.

<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice" version="1.0">

<clients>
  <client host="localhost" weight="1" cpu="10" maxusers="40000"><ip value="192.168.122.2"/></client>
  <client host="loadnode1" weight="1" cpu="9" maxusers="40000"><ip value="192.168.122.2"/></client>
  <client host="loadnode2" weight="1" cpu="8" maxusers="40000"><ip value="192.168.122.3"/></client>
  <client host="loadnode3" weight="1" cpu="9" maxusers="40000"><ip value="192.168.122.21"/></client>
  <client host="loadnode4" weight="1" cpu="9" maxusers="40000"><ip value="192.168.122.11"/></client>
  <client host="loadnode5" weight="1" cpu="9" maxusers="40000"><ip value="192.168.122.12"/></client>
  <client host="loadnode6" weight="1" cpu="9" maxusers="40000"><ip value="192.168.122.13"/></client>
  <client host="loadnode7" weight="1" cpu="9" maxusers="40000"><ip value="192.168.122.14"/></client>
</clients>

<servers>
  <server host="192.168.122.10" port="80" type="tcp"/>
</servers>

<load>
  <arrivalphase phase="1" duration="10" unit="minute">
    <users maxnumber="15000" arrivalrate="8" unit="second"/>
  </arrivalphase>
  <arrivalphase phase="2" duration="10" unit="minute">
    <users maxnumber="15000" arrivalrate="8" unit="second"/>
  </arrivalphase>
  <arrivalphase phase="3" duration="10" unit="minute">
    <users maxnumber="20000" arrivalrate="3" unit="second"/>
  </arrivalphase>
</load>

<sessions>
  <session probability="100" name="ab" type="ts_http">
    <for from="1" to="10000000" var="i">
      <request><http url="/test.txt" method="GET" version="1.1"/></request>
    </for>
  </session>
</sessions>
</tsung>

There are a lot of things to understand at the beginning, but once you understand them, it becomes easy.

  • <client> simply specifies the hosts that run Tsung. You can specify the IPs Tsung should use and the maximum number of CPUs. maxusers sets the maximum number of users the node can simulate; each user performs the actions we define later.
  • <servers> specifies the HTTP server(s) you want to test. We can use this option to test an IP cluster or a single server.
  • <load> defines when our simulated users will "arrive" at the site, and how quickly they arrive.
    • <arrivalphase>: in the first phase, which lasts 10 minutes, 15,000 users arrive at a rate of 8 users per second.
    • <arrivalphase phase="1" duration="10" unit="minute">
    • <users maxnumber="15000" arrivalrate="8" unit="second"/>
    • There are two more arrivalphases, whose users arrive in the same way.
    • Together, these arrivalphases make up a <load>, which controls how many requests per second we can generate.
  • <session> defines what those users will do once they reach your site.
  • probability lets you define the random things users might do. Sometimes they click here, sometimes they click there. The probabilities must add up to 100% (see the sketch after this list).
  • In the configuration above, users only do one thing, so it has a probability of 100%.
  • <for from="1" to="10000000" var="i"> is what the users do 100% of the time: they loop 10,000,000 times, and each iteration <request>s one web page, /test.txt.
  • This loop construct lets a small number of user connections produce a large number of requests per second.
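
As an example of the probability setting, here is a hedged sketch of a <sessions> block that splits users between two actions; the session names and the second URL are illustrative and not part of my configuration:

<sessions>
  <!-- 70% of users fetch the small text file -->
  <session probability="70" name="read_file" type="ts_http">
    <request><http url="/test.txt" method="GET" version="1.1"/></request>
  </session>
  <!-- the other 30% fetch the front page -->
  <session probability="30" name="front_page" type="ts_http">
    <request><http url="/index.html" method="GET" version="1.1"/></request>
  </session>
</sessions>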

Once you have a good understanding of these settings, you can create a handy alias to quickly view the Tsung report.

vim ~/.bashrc
alias treport="/usr/lib/tsung/bin/tsung_stats.pl; firefox report.html"
source ~/.bashrc

Then start Tsung:

tsung start
Starting Tsung
"Log directory is: /root/.tsung/log/20120421-1004"

When the test finishes, view the report:

cd /root/.tsung/log/20120421-1004
treport

Using Tsung to plan your cluster build

Now that we have a powerful enough load-testing tool, we can plan the rest of the cluster build:

1. Use Tsung to test a single HTTP server and establish a baseline.

2. Tune the web servers, testing regularly with Tsung to measure the improvements.

3. Tune the TCP sockets of those systems for optimal network performance. Then test again, and again, and keep testing.

4. Build the LVS cluster from these fully tuned web servers.

5. Use the Tsung cluster to stress-test the LVS.

In the next two articles, I will show you how to get the best performance out of your web servers and how to tie them together with the LVS cluster software.

