Analysis Report: Front-End and Server-Side Performance Testing Tools
The main function of a performance testing tool is to simulate real business operations against the system under test and apply stress loads as in a production environment. It monitors how the system behaves under different transaction mixes and pressure levels, and identifies potential performance bottlenecks for analysis and optimization.
The client and server communicate by exchanging messages. Picture the performance testing tool as a go-between: the client hands its message to the intermediary, which passes it on to the server, and the server's reply is likewise relayed back to the client. To gain this ability, performance testing tools generally record or script the client's behavior.
With a recorded script, the intermediary acquires the client's behavior and can impersonate the client, communicating with the server in its place. Better still, it can replicate itself, so that N copies of the intermediary can talk to the server simultaneously. This record-and-replay behavior is the basic characteristic of a performance testing tool. (The performance tool is like a third party inserted into the conversation, one that can clone itself without limit!)
The popular performance testing tools all share the same basic working principle: use multiple threads or processes on the client side to simulate virtual users, apply pressure to the server, and monitor and collect performance data throughout the process.
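The principle above can be sketched in a few lines of Python using only the standard library. Everything here is an illustrative assumption, not part of any real tool: the local test server, the thread counts, and the helper names all stand in for a real load generator and a real system under test.

```python
# Sketch of the multi-threaded "virtual user" principle: N threads each
# replay a recorded client behavior (here, a simple GET) against the server
# while response times are collected.
import threading
import time
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

def run_virtual_users(url, users=5, requests_per_user=4):
    """Simulate `users` concurrent virtual users; return all latencies."""
    latencies = []
    lock = threading.Lock()
    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                resp.read()
            with lock:
                latencies.append(time.perf_counter() - start)
    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/" % server.server_port
    lat = run_virtual_users(url)
    print("requests: %d, mean latency: %.1f ms"
          % (len(lat), 1000 * sum(lat) / len(lat)))
    server.shutdown()
```

A real tool adds script recording, ramp-up control, and richer monitoring on top of exactly this loop.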
Performance testing tools should have the following characteristics:
1. The tool itself occupies few system resources and offers good scalability and high availability.
2. It can simulate real business transaction operations and generate realistic business pressure under concurrency. (This is the core requirement.)
3. The stress test results can be analyzed effectively, so that bottlenecks in the system under test are identified quickly.
4. Test scripts are highly reusable.
Comparison between front and back ends:
Web applications are built on the Hypertext Transfer Protocol (HTTP) and the Hypertext Markup Language (HTML). HTTP is a stateless request/response protocol; HTML is a simple markup language used to create hypertext documents.
For a single page, the request/response exchange may occur many times. Here is something worth knowing before running a performance test: the browser limits the number of concurrent requests used to download resources and applies caching rules, and HTML parsing has its own processing order. As a result, a considerable part of the response time a user perceives does not depend entirely on the application's back-end processing, but on the front end of the web application. At Yahoo, teams reduced end-user response times by more than 25% through purely front-end performance techniques.
HTTP is an application-layer protocol used to transfer WWW data. It follows a request/response model: the client sends the server a request whose header contains the request method, URI, and protocol version, followed by a MIME-like message carrying request modifiers, client information, and possibly body content. The server answers with a status line containing the message's protocol version and a success or error code, followed by server information, entity metadata, and possibly entity content.
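The request/response structure described above can be observed directly with Python's standard library. The local echo server below is an assumption used only to make the example self-contained; against any real HTTP server the exchange has the same shape.

```python
# Observe HTTP's request/response model: the client sends a request line and
# headers; the server answers with a status line, entity metadata (headers),
# and entity content (the body).
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>hi</body></html>"
        self.send_response(200)                 # status line: version + code
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                  # entity content
    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")                        # request line: GET / HTTP/1.1
resp = conn.getresponse()
print(resp.status, resp.reason)                 # prints: 200 OK
print(resp.getheader("Content-Type"))           # entity metadata
data = resp.read()                              # entity content
conn.close()
server.shutdown()
```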
HTML is a simple markup language used to create hypertext documents. Documents written in HTML are independent of any particular operating system. Since its birth, HTML has been used to describe web page layout; files described in HTML are displayed in a WWW browser.
Below, two tools are used to compare the difference between the two kinds of response-time measurement.
Apache Benchmark (ab) is a well-known, compact stress testing tool.
After downloading and installing (or unpacking) the Apache web server, you will find the ab executable in its bin/ directory.
Open a command prompt ("Run" > "cmd") and change into the bin/ directory.
ab -c [concurrent users] -n [number of requests] [URL of the page under test]
Set up a single user request to put pressure on the Baidu homepage:
From the output above, we can see that the total number of bytes in the request is 8024 bytes and the response time is 0.173 seconds, i.e. 173.010 milliseconds.
Firebug is a well-known debugging tool and one of Firefox's proudest integrated add-ons.
On the Firefox menu bar, choose "Tools" > "Add-ons", search for Firebug, download and install it, then restart the browser.
Access the Baidu homepage:
The request size is 10 KB and the response time is 1.4 seconds; this is clearly much larger than the figures the ab tool produced. Looking carefully at the data Firebug gives, when accessing the URL http://www.baidu.com/ the client (browser) and the application exchange data not once but 5 times.
Let's analyze one of the requests. Firebug's waterfall in Firefox contains red and blue lines: the blue line marks when the DOMContentLoaded event fired, and the red line marks when the onload event was triggered. DOMContentLoaded is a standard event recommended by the W3C; it fires once the page's DOM tree has been built, while onload fires only after all of the page's resources (image files, CSS files, JS files, etc.) have finished downloading.
In the lower-right corner we get two response times: 1.41 seconds is when the onload event was triggered, and 1.4 seconds is the total time for all requests on the page to return. So which one is the response time the user perceives? Strictly speaking, neither. The user's perception is indeterminate, depending on the page type and how it is presented. If a page only provides information to read, the user starts reading as soon as content appears, so the perceived response time runs from sending the request until readable information appears on the page. If a page contains a lot of interactive content that must be filled in or dragged, the user will only feel the page is ready once all page elements are displayed correctly and all JS files have finished executing.
The point of researching web front-end performance is not to obtain one precise response-time figure. In fact, as the Firebug chart shows, web performance depends partly on the web server and application server (establishing connections, downloading content), and partly on the browser's implementation and the execution of JS on the page. The web server's and application server's response times depend on server load and pressure; the time attributable to the browser's implementation and JS execution is almost unrelated to server load. The front-end response time is therefore a real part of the total response time, so it is necessary to understand front-end performance.
So if front-end optimization is this effective, why do we still need back-end performance tests? Because their concerns differ: front-end performance focuses on the experience of a single user, while back-end performance is concerned with whether the server can handle requests stably and quickly as more users access the system. A strong back end is the foundation of the front end.
Performance testing has always been a very important part of web applications. At present, most people focus on the server side: finding the server's performance bottleneck through various methods and trying to tune it away. But for a web application, besides ensuring the server returns page data quickly enough, you can also approach performance testing and tuning from the perspective of the page front end.
An online site performance evaluation service is available at http://www.webpagetest.org/
In fact, this website is also an open-source project, so you can deploy your own instance for internal testing.
Standalone programs:
DynaTrace Ajax Edition
It plugs into IE and Firefox (specific Firefox versions are required) and ships as an independent installer of over 50 MB. It supports function-level measurement and analysis; in addition, essentially all the features the other tools provide are supported by this tool as well.
What is dynaTrace Ajax?
DynaTrace Ajax currently comes in two editions, free and commercial; for the differences between them, see the edition comparison. This article focuses on the free edition. Versions before 3.0 run only in IE browsers (IE6, IE7, and IE8); from the 3.0 beta onward it supports performance tracing in both IE and Firefox.
Application Case Analysis
The record below comes from a case in an actual project we are currently developing (IBM Docs): open a PPT document on the web, then analyze the performance problems based on the information dynaTrace collected.
Performance Report (Performance Report view)
Open the Performance Report view from the Cockpit panel:
Figure. Performance Report
The Performance Report view records detailed information about every page visited. The following information is displayed:
Page load time: OnLoad Time [ms] shows the time from the page starting to load in the browser until the onload event is dispatched; Total Load Time [ms] shows the total time consumed by everything loaded on the page.
Network request time: the Remark column shows the total number of requests and the number of XHR requests.
Server time: On Server [ms] is the total time, over all requests sent, from the client sending each request until the server starts responding.
From the lower-right panel you can obtain an overall performance analysis report (for more details, see the corresponding nodes in the Cockpit panel). For example:
The Network section shows how many resources were read from the browser cache, how many HTTP redirects wasted unnecessary network transfer time, and how much network time could be saved by merging same-domain CSS and JS requests.
In my example, the following content caught my attention:
Network time is long and the number of requests excessive: there are 896 network requests in total, of which 300+ are requests for images and 300+ are reads of those same images from the cache.
Server processing took 20 seconds in total: this indicates the server may also have performance problems. We recommend using the Performance Inspector tool to analyze server-side performance problems.
The Remark column also shows that the page sent a total of 23 XMLHttpRequest requests; their time points can be found on the event track of the timeline. The next section discusses these issues in more detail.
Timeline (Timeline view): the page lifecycle
In this view, we can observe that:
Rendering in the browser: hovering over the rendering track shows that most of this time goes to layout calculation; this is usually most pronounced on IE.
Network requests download in parallel but are still time-consuming: on the one hand there are simply too many requests; on the other, one obvious culprit is an XMLHttpRequest that spends nearly 7 seconds waiting on the server.
The Event axis displays mouse click events, XMLHttpRequest events, and OnLoad events.
Zoom into the long stretch of network requests on the right (in my example, the time slice from 16 s to 29 s): press the left mouse button at the start, drag to the end, and release; the view zooms to the selected time slice, as shown below:
Figure. Zooming in on the timeline
Right-click the zoomed time slice and choose "Drill Down to Timeframe" to go to the PurePath view, which displays all activity within the current time slice.
DynaTrace Ajax is a very useful and important tool for front-end software engineers and performance analysts. The tool is constantly updated, its features keep growing, and the set of supported browsers keeps expanding; combined with continuous-integration tooling, it makes application performance problems across different browsers easier to find, earlier and more often.
Whatever the evaluation tool, the basic technique is the same: use threads to simulate virtual users. The main difficulty lies in writing the test scripts. Each tool's scripts differ, but most tools provide a recording function, so testers can produce tests without coding.
As we all know, servers are the core of the entire network system and computing platform. Much important data is stored on servers and many network services run on them, so the performance of the server determines the performance of the entire application system.
There are many brands and types of servers on the market. When purchasing one, how can users pick the right product for their own application from so many models? Reading spec sheets is not enough; it is best to filter candidates through actual testing. There are also many kinds of evaluation software, so which should you choose? The following describes some typical test tools:
(1) Server system performance testing tools
The performance of a server system can be broken down into processor, memory, storage, and network; different applications place higher performance demands on different parts.
(2) Test tools for applications
With the growth of web applications, more and more server application solutions are web-centric, and many companies build their various application architectures mainly around web applications. The focus of web testing is not quite the same as for earlier applications: once the basic functions have passed testing, the important system performance tests must be performed. System performance is a large concept with wide coverage; for a software system it includes execution efficiency, resource usage, stability, security, compatibility, reliability, and so on. The following describes server system performance testing from the angle of load and stress.
Load and stress testing is carried out with a load testing tool: a number of virtual users are simulated to exercise the system and check whether it meets the expected design targets. The purpose of a load test is to measure the system's outputs as load increases gradually, such as throughput, response time, CPU load, and memory usage, in order to judge characteristics like stability and responsiveness. Load testing is generally done with tools such as LoadRunner, WebLoad, or QALoad; the main work is writing the test script, which typically covers the users' common operations, then running it and generating a report. Stress testing tools put the web server under stress and can expose serious problems such as crashes, hangs, and memory leaks. A program with a memory leak may show no problem after one or two runs, but after tens of thousands of runs the accumulating leaks will drag the system down.
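A step-load test of the kind described above can be sketched as follows. The local single-threaded server, the user counts, and the 10 ms of simulated work are all illustrative assumptions, not a real benchmark; real tools replace this loop with recorded scripts and richer metrics.

```python
# Step-load sketch: increase the number of concurrent virtual users step by
# step and record the mean response time at each level. Because the local
# server handles one request at a time, latency visibly grows with load.
import threading
import time
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.01)                 # simulate some server-side work
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass

def measure_step(url, users, requests_per_user=3):
    """Run one load level; return the mean response time in seconds."""
    latencies, lock = [], threading.Lock()
    def user():
        for _ in range(requests_per_user):
            t0 = time.perf_counter()
            urllib.request.urlopen(url).read()
            with lock:
                latencies.append(time.perf_counter() - t0)
    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(latencies) / len(latencies)

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), SlowHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/" % server.server_port
    for users in (1, 2, 4, 8):           # the gradually increasing load
        print("users=%d  mean=%.1f ms" % (users, 1000 * measure_step(url, users)))
    server.shutdown()
```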
ab is a performance testing tool for the Apache Hypertext Transfer Protocol (HTTP) server. It is designed to profile the performance of your current Apache installation; in particular, it shows how many requests per second your installed Apache can serve.
ab [ -A auth-username:password ] [ -c concurrency ] [ -C cookie-name=value ] [ -d ] [ -e csv-file ] [ -g gnuplot-file ] [ -h ] [ -H custom-header ] [ -i ] [ -k ] [ -n requests ] [ -p POST-file ] [ -P proxy-auth-username:password ] [ -q ] [ -s ] [ -S ] [ -t timelimit ] [ -T content-type ] [ -v verbosity ] [ -V ] [ -w ] [ -x table-attributes ] [ -X proxy[:port] ] [ -y tr-attributes ] [ -z td-attributes ] [http://]hostname[:port]/path
-A auth-username:password
Supplies BASIC authentication credentials to the server. The username and password are separated by a single ":" and sent base64-encoded. The string is sent regardless of whether the server needs it (that is, whether it has sent a 401 authentication-required code).
-c concurrency
The number of requests to perform at a time. The default is one request at a time.
-C cookie-name=value
Adds a Cookie: line to the request. The argument is typically a name=value parameter pair. This option may be repeated.
-d
Do not display the "percentage served within XX [ms] table" (kept for backward compatibility).
-e csv-file
Writes a comma-separated (CSV) file containing, for each percentage of requests from 1% to 100%, the time it took to serve that percentage. Since this format is already "binned", it is usually more useful than the gnuplot format.
-g gnuplot-file
Writes all measured values to a 'gnuplot' or TSV (tab-separated) file. This file can easily be imported into Gnuplot, IDL, Mathematica, Igor, or even Excel. The first line holds the labels.
-H custom-header
Appends extra header information to the request. The argument is typically a valid header line, containing a colon-separated field-value pair (for example, "Accept-Encoding: zip/zop;8bit").
-i
Perform HEAD requests instead of GET.
-k
Enables the HTTP KeepAlive feature, i.e. performs multiple requests within one HTTP session. KeepAlive is off by default.
-n requests
The number of requests to perform in the test session. The default is to perform a single request, whose result is usually not representative.
-p POST-file
The file containing the data to POST.
-P proxy-auth-username:password
Supplies BASIC authentication credentials to an intermediate proxy. The username and password are separated by a single ":" and sent base64-encoded. The string is sent regardless of whether the proxy needs it (that is, whether it has sent a 407 proxy-authentication-required code).
-q
When processing more than 150 requests, ab outputs a progress count on stderr every 10% or every 100 requests. The -q flag suppresses these messages.
-s
Use the SSL-protected https protocol instead of http (only available if ab was compiled with SSL support; ab -h shows whether it was). This feature is experimental and crude, and is best left unused.
-S
Do not display the median and standard deviation values, nor the warning/error messages shown when the average and median differ by one or two standard deviations. Defaults to displaying the min, average, and max values. (Kept for backward compatibility.)
-t timelimit
The maximum number of seconds to spend testing. This implies -n 50000 internally; it lets you benchmark the server within a fixed total amount of time. By default there is no time limit.
-T content-type
The Content-type header to use for POST data.
-v verbosity
Sets the verbosity level: 4 and above prints header information, 3 and above prints response codes (404, 200, etc.), and 2 and above prints warnings and informational messages.
-V
Displays the version number and exits.
-w
Prints the results in HTML tables. The default is a two-column table on a white background.
http_load is installed from source:
tar zxvf http_load-12mar2006.tar.gz
cd http_load-12mar2006
make && make install
Command format: http_load -p <number of concurrent processes> -s <duration in seconds> <file of URLs to fetch>
Parameters can be combined freely and their order does not matter. For example, http_load -parallel 5 -seconds 300 urls.txt is also valid. A brief description of the parameters:
-parallel, abbreviated -p: the number of concurrent user processes.
-fetches, abbreviated -f: the total number of fetches.
-rate, abbreviated -r: the number of accesses per second.
-seconds, abbreviated -s: the total duration of the run in seconds.
Prepare the URL file, urllist.txt. The format is one URL per line; it is better to have more than 50 URLs.
http_load -p 30 -s 60 urllist.txt
Now that the parameters are understood, let's run a command and look at the results it returns.
Command: % ./http_load -rate 5 -seconds 10 urls runs a test lasting 10 seconds at a rate of 5 fetches per second.
49 fetches, 2 max parallel, 289884 bytes, in 10.0148 seconds
5916 mean bytes/connection
4.89274 fetches/sec, 28945.5 bytes/sec
msecs/connect: 28.8932 mean, 44.243 max, 24.488 min
msecs/first-response: 63.5362 mean, 81.624 max, 57.803 min
HTTP response codes: code 200 -- 49
1. 49 fetches, 2 max parallel, 289884 bytes, in 10.0148 seconds
In the test above, 49 requests were completed, at most 2 ran in parallel, 289884 bytes of data were transferred in total, and the run took 10.0148 seconds.
2. 5916 mean bytes/connection
The average amount of data transferred per connection: 289884 / 49 = 5916 bytes.
3. 4.89274 fetches/sec, 28945.5 bytes/sec
The server responded to 4.89274 requests per second and transferred 28945.5 bytes of data per second.
4. msecs/connect: 28.8932 mean, 44.243 max, 24.488 min
The average time to establish a connection was 28.8932 ms, the maximum 44.243 ms, and the minimum 24.488 ms.
5. msecs/first-response: 63.5362 mean, 81.624 max, 57.803 min
The same statistics for the time from request to first response.
6. HTTP response codes: code 200 -- 49
The distribution of HTTP response codes for the fetched pages. If many responses are not 200 (for example 403), check whether the system has hit a bottleneck.
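As a sanity check, the derived figures in this output follow directly from the raw counters. The short snippet below reproduces them, with the input values copied from the sample output above.

```python
# Reproduce http_load's derived summary statistics from its raw counters.
fetches = 49            # completed requests, from the sample output
total_bytes = 289884    # total bytes transferred
seconds = 10.0148       # run duration

mean_bytes_per_connection = total_bytes // fetches
fetches_per_sec = fetches / seconds
bytes_per_sec = total_bytes / seconds

print(mean_bytes_per_connection)   # prints 5916 (289884 / 49 is exact)
print("%.5f fetches/sec, %.1f bytes/sec" % (fetches_per_sec, bytes_per_sec))
```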
The main indicators in the test results are fetches/sec and msecs/connect: the number of queries the server can respond to per second, and the time taken per connection. As measures of performance, they appear more precise and persuasive than Apache's ab.
The test results mainly come down to these two values: qpt, the number of responses per second, and the response time per connection. Of course, these two indicators alone cannot complete a performance analysis; CPU and memory usage must also be analyzed before drawing a conclusion.