ilanni: Using ab, the Apache Performance Testing Tool

Source: Internet
Author: User
Tags website performance



This article was sponsored by Xiuyi Linfeng and first published on the author's blog.

Website stress testing is an essential part of server performance tuning: only when the server is under heavy load do problems caused by improper software and hardware settings truly surface.

Currently, the most common performance testing tools are ab, http_load, webbench, and siege. Today we will look at ab specifically.

ab is apache's stress testing tool. It is very practical: it can run website stress tests not only against apache servers but also against other kinds of servers, such as nginx, tomcat, and IIS.

This article covers the ab command in the following order:


1. Principle of ab

2. Installing ab

3. ab parameter reference

4. ab performance metrics

5. Using ab in practice

6. Testing nginx performance

I. Principle of ab

ab is short for ApacheBench, the benchmarking command that ships with the Apache HTTP server.

How ab works: the ab command creates multiple concurrent threads to simulate many visitors accessing a URL at the same time. Because it is URL-based, it can be used to load-test apache, nginx, lighttpd, tomcat, IIS, and other web servers.


The ab command places very low demands on the machine that generates the load; it uses little CPU and memory. But it can place a huge load on the target server, much like a CC attack. So be careful when testing: too much load at once can exhaust the target server's resources and, in severe cases, even crash it.

II. Installing ab

Installing ab is very simple. If apache was built from source, it is simpler still: after apache is installed, the ab command sits in the bin directory of the apache installation, for example:

/usr/local/apache2/bin


If apache was installed from a yum RPM package, the ab command is placed in the /usr/bin directory by default. You can verify this with:

which ab


Note: if you want the ab command without installing apache itself, you can install the apache toolkit package httpd-tools directly:

yum -y install httpd-tools


To check whether ab installed successfully, switch to the directory above and run ab -V:

ab -V


If ab is installed correctly, the ab -V command prints ab's version information.

Note that the above applies to the Linux platform. On Windows, you can download the corresponding apache build and install it the same way.

At the time of writing, the latest apache release is 2.4.10; alternatively, you can download one of the integrated software packages offered on the apache official website, as follows:


III. ab Parameter Reference

You can view the usage of the ab command with its help option:

ab --help


The parameters are described below:

-n requests: the number of requests to perform in the test session. By default only one request is performed.

-c concurrency: the number of requests to issue at a time. The default is one at a time.

-t timelimit: the maximum number of seconds to spend testing. This implies -n 50000, capping the test at a fixed total length of time. By default there is no time limit.

-p postfile: a file containing the data to POST.

-P proxy-auth: supplies BASIC authentication credentials to an intermediate proxy. The username and password are separated by a colon and sent base64-encoded. The string is sent whether or not the proxy asks for it (that is, whether or not it has returned a 401 status).

-T content-type: the Content-type header to use when POSTing data.

-v verbosity: sets how much detail is printed. 4 and above prints header information, 3 and above prints response codes (404, 200, etc.), and 2 and above prints warnings and informational messages.

-V: displays the version number and exits.

-w: prints the results as an HTML table. By default it is a two-column table on a white background.

-i: performs HEAD requests instead of GET.

-x attributes: the string to use as attributes for <table>.

-X proxy:port: uses a proxy server for the requests.

-y attributes: the string to use as attributes for <tr>.

-z attributes: the string to use as attributes for <td>.

-C cookie: adds a Cookie: line to the request. The typical form is a name=value pair; this option can be repeated.

-H header: adds an extra header line to the request, given as a valid colon-separated field/value pair (for example, "Accept-Encoding: zip/zop;8bit").

-A auth: supplies BASIC authentication credentials to the server. The username and password are separated by a colon and sent base64-encoded. The string is sent whether or not the server asks for it (that is, whether or not it has returned a 401 status).

-h: displays usage information.

-d: suppresses the "percentage served within xx [ms]" table (kept for compatibility with earlier versions).

-e csv-file: writes a comma-separated values (CSV) file recording, for each percentage of requests served (from 1% to 100%), the time in milliseconds it took to serve that percentage. Because the data is already binned, this format is more useful than the gnuplot one.

-g gnuplot-file: writes all measured values to a gnuplot-readable TSV (tab-separated) file, which can easily be imported into gnuplot, IDL, Mathematica, Igor, or even Excel. Labels appear on the first line of the file.

-k: enables the HTTP KeepAlive feature, performing multiple requests within one HTTP session. KeepAlive is off by default.

-q: when more than 150 requests are processed, ab prints a progress count to stderr roughly every 10% of requests or every 100 requests; the -q flag suppresses these messages.
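As a sketch, here are two common flag combinations built from the options above. The URL and the postdata.txt file are placeholders, and the commands are only echoed rather than run against a real server:

```shell
# Hypothetical flag combinations; a.ilanni.com and postdata.txt are placeholders.
url="http://a.ilanni.com/index.php"

# 100 requests, 10 at a time, with keep-alive, saving percentile timings to CSV:
echo "ab -n 100 -c 10 -k -e percentiles.csv $url"

# POSTing form data from a file, with an explicit Content-type and a cookie:
echo "ab -n 100 -c 10 -p postdata.txt -T application/x-www-form-urlencoded -C session=abc123 $url"
```

Dropping the leading echo runs the real test, so only do that against a server you are allowed to load.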

IV. ab Performance Metrics

There are several important indicators during performance testing:

1. Throughput (Requests per second)

Throughput is a quantitative measure of the server's concurrent processing capacity, in reqs/s. It is the number of requests handled per unit of time at a given number of concurrent users. The maximum number of requests that can be handled per unit of time at a given concurrency is called the maximum throughput.

Remember: throughput is always relative to a number of concurrent users. This has two implications:

a. throughput is only meaningful with respect to a given number of concurrent users;

b. throughput generally differs for different numbers of concurrent users.

Formula: total number of requests / time taken to complete those requests, that is:

Requests per second = Complete requests / Time taken for tests

Note that this value reflects the overall performance of the machine under test: the higher, the better.
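The formula can be checked with a quick calculation. The numbers below are illustrative (chosen to roughly match the apache result reported later), not values from a real run:

```shell
# Illustrative numbers: 100 completed requests in 0.488 seconds.
complete_requests=100
time_taken=0.488   # seconds, ab's "Time taken for tests"

# Requests per second = Complete requests / Time taken for tests
awk "BEGIN { printf \"%.2f\n\", $complete_requests / $time_taken }"   # prints 204.92
```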

2. Number of concurrent connections

The number of concurrent connections is the number of connections the server is holding at a given point in time; simply put, each connection is a session.

3. Number of concurrent users (Concurrency Level)

Note the difference between this concept and the number of concurrent connections. One user may hold multiple sessions, i.e. multiple connections. Over HTTP/1.1, IE7 opens up to 2 concurrent connections per host, IE8 up to 6, and Firefox 3 up to 4; so to estimate the number of concurrent users, divide the number of connections by this per-browser factor.

4. Average user request wait Time (Time per request)

Calculation formula: time spent on processing all requests/(total number of requests/number of concurrent users), that is:

Time per request = Time taken for tests / (Complete requests / Concurrency Level)

5. Average server request wait time (Time per request: across all concurrent requests)

Formula: time taken to process all requests / total number of requests, that is:

Time per request (across all concurrent requests) = Time taken for tests / Complete requests

We can see that it is the reciprocal of the throughput.

At the same time, it is equal to the average user request wait time/number of concurrent users, that is

Time per request / Concurrency Level
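The two latency formulas can be cross-checked the same way, again with illustrative numbers (100 requests, 10 concurrent users, 0.488 s total):

```shell
complete_requests=100
concurrency=10
time_taken=0.488   # seconds

# User-perceived wait: Time taken for tests / (Complete requests / Concurrency Level)
awk "BEGIN { printf \"%.1f\n\", 1000 * $time_taken / ($complete_requests / $concurrency) }"   # 48.8 (ms)

# Server-side wait (reciprocal of throughput): Time taken for tests / Complete requests
awk "BEGIN { printf \"%.2f\n\", 1000 * $time_taken / $complete_requests }"                    # 4.88 (ms)
```

Dividing 48.8 by the concurrency of 10 gives the same 4.88 ms, confirming that the two formulas are consistent.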

V. Using ab in Practice

ab has many command parameters; in practice, -c and -n are the most commonly used.

Next, we will create a new virtual host a.ilanni.com. As follows:

cat /etc/httpd/conf/httpd.conf | grep -v ^# | grep -v ^$


mkdir -p /www/a.ilanni.com

echo '<?php phpinfo(); ?>' > /www/a.ilanni.com/index.php

cat /www/a.ilanni.com/index.php


After the virtual host is created, start apache and access a.ilanni.com:

wget http://a.ilanni.com



With the virtual host a.ilanni.com in place, we can now test apache's performance. Run the following command:

ab -c 10 -n 100 http://a.ilanni.com/index.php

-c 10 sets the number of concurrent users to 10.

-n 100 sets the total number of requests to 100.

http://a.ilanni.com/index.php is the target URL of the requests.

Together, these options simulate 10 users concurrently issuing a total of 100 requests against index.php.


The test results are clear: the throughput measured for apache is Requests per second: 204.89 [#/sec] (mean).

The output contains other fields as well, explained below:

Server Software indicates the name of the tested Web Server Software.

Server Hostname indicates the requested URL host name.

Server Port indicates the listening Port of the tested Web Server software.

Document Path is the path portion of the request URL. From the file extension we can usually tell the type of request.

Document Length indicates the body Length of the HTTP response data.

Concurrency Level indicates the number of concurrent users, which is one of the parameters we set.

Time taken for tests indicates the total Time it takes to process all these requests.

Complete requests indicates the total number of requests, which is one of the parameters we set.

Failed requests is the number of failed requests. Failure here means an exception while connecting to the server, while sending data, or a timeout waiting for a response. If any received HTTP responses carry a status code other than 2xx, an additional statistic named Non-2xx responses appears in the results, counting those requests; they are not counted as failed requests.

Total transferred indicates the Total length of the response data of all requests, including the header information of each HTTP Response Data and the length of the body data. Note that the length of the HTTP request data is not included here, but only the total length of the application layer data from the web server to the user's PC.

HTML transferred is the sum of the body data in all responses, that is, Total transferred minus the HTTP response headers.

Requests per second is the throughput. Formula: Complete requests / Time taken for tests.

Time per request is the average user request wait time. Formula: Time taken for tests / (Complete requests / Concurrency Level).

Time per request (across all concurrent requests) is the average server request wait time. Formula: Time taken for tests / Complete requests, i.e. the reciprocal of the throughput; equivalently, Time per request / Concurrency Level.

Transfer rate is the amount of data these requests fetched from the server per unit time. Formula: Total transferred / Time taken for tests. This statistic indicates the outbound bandwidth the server would need when its processing capacity reaches its limit.

Percentage of the requests served within a certain time (ms): this section describes the distribution of request processing times. In the test above, for example, 80% of requests were handled in no more than 6 ms. The processing time here is the Time per request defined earlier, i.e. the average per-request processing time for a single user.
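When running ab from scripts, the throughput figure can be pulled out of a saved report. This sketch assumes the output shown above was saved to a hypothetical file named ab.txt:

```shell
# ab.txt is a hypothetical file holding a saved ab report; here we fabricate
# just the relevant line for illustration.
printf 'Requests per second:    204.89 [#/sec] (mean)\n' > ab.txt

# The throughput value is the 4th whitespace-separated field on that line.
awk '/Requests per second/ { print $4 }' ab.txt   # prints 204.89

rm ab.txt
```

In a real run you would produce ab.txt with something like `ab -c 10 -n 100 <url> > ab.txt`.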

VI. Testing nginx Performance

Section V tested apache's performance. Now let's test nginx.

Configure the nginx virtual host as follows:

cat /usr/local/nginx/conf/nginx.conf | grep -v ^# | grep -v ^$


After the virtual host is configured, we access it:

wget a.ilanni.com



Note that this virtual host is the same as the apache one and requests the same page.

Run the same commands as apache to test nginx:

ab -c 10 -n 100 http://a.ilanni.com/index.php

The result is as follows:


The test result is clear at a glance: the throughput measured for nginx is Requests per second: 349.14 [#/sec] (mean).

Comparing this with apache's throughput for the same page, nginx's throughput is clearly higher. By the metric described above, the higher the Requests per second, the better the server's performance.

On this test, then, nginx delivers higher performance than apache.


