Web Server Overview
Web systems are widely used on networks, and Web servers are an essential part of them. A complete Web architecture includes the HTTP protocol, the Web server, the Common Gateway Interface (CGI), the Web application programming interface (API), and the Web browser.
A Web server is a program that resides on a computer on the Internet. Built on HTTP, it provides the information provider's basic platform for information publishing, data query, data processing, and many other applications on the network. Its main function is to provide online information browsing. When a Web browser (the client) connects to the server and requests a file, the server processes the request and sends the file to the browser, along with information that tells the browser how to interpret the file (that is, its content type).
Web page processing by a Web server can be divided into three steps. Step 1: the Web browser sends a Web page request to a specific server. Step 2: after receiving the request, the Web server looks up the requested page and sends it to the browser. Step 3: the Web browser receives the requested page and displays it.
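For illustration, an abbreviated HTTP/1.1 exchange corresponding to these three steps might look as follows (the host name and content length are hypothetical, and most headers are omitted):

    GET /index.html HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Length: 1024

    <html> ... page body ... </html>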
Web servers not only store information, but also run scripts and programs based on the information that users supply through their browsers. On the Web, most common forms and search engines are driven by CGI scripts.
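As a rough sketch of how such a CGI program works (the output and variable handling are illustrative, not taken from any particular server), a minimal C version could look like this:

    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal CGI program: the Web server passes request data through
     * environment variables and expects a header block followed by a
     * blank line and the response body on standard output. */
    int main(void)
    {
        const char *query = getenv("QUERY_STRING");  /* e.g. "q=web+server" */

        /* The blank line separates the CGI header from the body. */
        printf("Content-Type: text/html\r\n\r\n");
        /* A real script would HTML-escape the query value. */
        printf("<html><body><p>You searched for: %s</p></body></html>\n",
               query ? query : "(nothing)");
        return 0;
    }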
Factors Affecting the Performance of Web Application Servers
The performance of a Web server is its ability to respond to user requests, and it is crucial to a Web system. Many attempts have been made to improve Web server performance, and many technologies and methods have been adopted. However, these technologies and methods often lack applicability in practice.
An analysis of previous studies shows two main reasons for this problem in Web server optimization: on the one hand, server performance evaluation is flawed; on the other hand, the optimization solutions do not take the application environment fully into account.
When evaluating Web servers, current performance evaluation tools simulate one or more computers communicating with the server under test. These computers actually constitute only a LAN environment, which differs from a real WAN environment.
In addition, although evaluation tools try to select a network load as close as possible to the actual one, there is still a gap with sustained high-frequency load. Furthermore, the selection and analysis of performance testing indicators are not reasonable enough, which leads to unfair and unreliable analysis results. When selecting methods to optimize a Web server, we usually consider only the server itself and seldom take the specific application environment into account. As a result, the evaluation results are not scientific enough, the application environment is not covered comprehensively, and the performance optimization is not targeted. Therefore, to optimize the performance of a Web server in a specific application environment, two main factors need to be considered: network characteristics and Web load characteristics.
Network characteristics refer to the network conditions of the Web server: whether it is a WAN or a LAN, and whether it is a high-speed network (a network with a transmission rate above 100 Mb/s) or a low-speed one. The types of data transmitted, transmission time, throughput, utilization, and other network characteristics vary across different networks.
In terms of Web load characteristics, a critical factor in evaluating Web servers is the choice of Web load. Although there are many evaluation tools, they all put considerable effort into load selection. The main purpose of studying Web load characteristics is to choose the evaluation tool whose load is closest to the real one, so as to obtain the most realistic performance data and, from it, a better analysis and optimization solution.
Web Application Server Optimization Methods
When optimizing Web servers, targeted optimization solutions must be adopted based on the actual situation and characteristics of the Web application system. First, consider the network characteristics. In a LAN, reducing the MTU (maximum transmission unit) can avoid data copying and checksum verification; concurrent request handling can be improved by optimizing the select system call or by performing computation in the socket event handler; and HTTP/1.1 persistent connections can improve system performance. In a WAN environment, however, these measures have little effect, and some even have the opposite effect.
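As a minimal sketch of the select-based concurrent request handling mentioned above (the port number and buffer size are arbitrary assumptions, error handling is minimal, and real request parsing is omitted):

    #include <unistd.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Sketch of a select()-based server loop: one process multiplexes
     * many client sockets instead of creating a process per request. */
    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);          /* arbitrary example port */
        bind(listener, (struct sockaddr *)&addr, sizeof(addr));
        listen(listener, 16);

        fd_set master, readable;
        FD_ZERO(&master);
        FD_SET(listener, &master);
        int maxfd = listener;

        for (;;) {
            readable = master;
            if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
                break;
            for (int fd = 0; fd <= maxfd; fd++) {
                if (!FD_ISSET(fd, &readable))
                    continue;
                if (fd == listener) {         /* new connection */
                    int client = accept(listener, NULL, NULL);
                    if (client >= 0) {
                        FD_SET(client, &master);
                        if (client > maxfd)
                            maxfd = client;
                    }
                } else {                      /* data from an existing client */
                    char buf[4096];
                    ssize_t n = recv(fd, buf, sizeof(buf), 0);
                    if (n <= 0) {             /* client closed or error */
                        close(fd);
                        FD_CLR(fd, &master);
                    } else {
                        /* parse the HTTP request and send a response here */
                    }
                }
            }
        }
        return 0;
    }

At this level, an HTTP/1.1 persistent connection simply means not closing the client socket after a response is sent, so the same descriptor can carry further requests without the cost of a new connection.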
For example, reducing the MTU of user connections increases the server's processing overhead, and under WAN network delay and bandwidth limits, HTTP/1.1 persistent connections do not significantly affect server performance. In a wide area network, the time an end user waits for a request depends on the degree of network latency and the connection bandwidth limit. In wide area networks, software and hardware interrupts account for a large part of network processing, so an adaptive interrupt processing mechanism greatly benefits the server's responsiveness; placing the server in the kernel and changing a process-based design to a transaction-based one can also improve server performance to varying degrees.
For Web load, in addition to analyzing its characteristics so that the actual load can be reproduced during evaluation, the load in the network environment where the Web server runs must also be considered. Servers are expected not only to meet normal workload requirements but also to maintain high throughput during peak hours. However, server performance under high load is often lower than expected.
Server overload falls into two types. The first is instantaneous overload, that is, a temporary, short-term overload, caused mainly by the characteristics of the server load. A large number of studies have shown that the network traffic of Web requests is self-similar, that is, request traffic can vary significantly over a wide range. This overloads the server for short periods, but the duration is usually very brief. The second is long-term overload, which is generally caused by a special event, such as a denial-of-service attack on the server or a "livelock".
The first type of overload is unavoidable, but the second can be mitigated by improving the server. Setting malicious attacks aside, a careful analysis of how the server processes packets shows that the root cause of performance degradation under overload is the unfair preemption of the CPU by the high-priority processing phase.
Therefore, limiting the CPU time spent in the high-priority processing phase, or limiting the number of CPUs devoted to high-priority processing, can reduce or eliminate receive livelock. The following methods can be used:
1. Use the polling mechanism.
To reduce the impact of interrupts on system performance, the "half-processing" method is very effective under normal load, but under high load it still leads to livelock. In this case a polling mechanism can be used. Although polling wastes resources and responds more slowly when the load is normal, it is far more effective than interrupt-driven processing when network data arrives at the server at a high rate.
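The polling described here happens at the network-interface level inside the operating system; as a loose user-space analogue only (the buffer size is an assumption), a server can drain a non-blocking socket in a loop instead of reacting to one notification per packet:

    #include <errno.h>
    #include <sys/socket.h>

    /* Poll a socket and drain everything that has already arrived,
     * instead of handling one notification (interrupt) per packet. */
    static void drain_socket(int fd)
    {
        char buf[4096];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
            if (n > 0) {
                /* process n bytes of request data here */
            } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                break;                /* nothing left; poll again later */
            } else {
                break;                /* connection closed or real error */
            }
        }
    }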
2. Reduce context switching.
This method improves performance under any load. The server can achieve it by introducing kernel-level or hardware-level data streams. A kernel-level data stream forwards data from the source over the system bus without passing it through the application process; because the data still resides in main memory, CPU operations are still required.
A hardware-level data stream forwards data from the source over a private data bus, or via DMA over the system bus, again without passing the data through the application process. In this way, no user thread intervenes during data transmission, which reduces both the number of data copies and the overhead of context switching.
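On Linux, the sendfile system call is one concrete example of a kernel-level data stream: file data is forwarded to the socket inside the kernel and never crosses into the application's address space. A minimal sketch, assuming the file and the client socket are already open:

    #include <sys/sendfile.h>
    #include <sys/stat.h>

    /* Send an entire file to a connected socket without copying it
     * through user space: the kernel moves the data directly. */
    static int send_static_file(int client_fd, int file_fd)
    {
        struct stat st;
        if (fstat(file_fd, &st) < 0)
            return -1;

        off_t offset = 0;
        while (offset < st.st_size) {
            ssize_t sent = sendfile(client_fd, file_fd, &offset,
                                    st.st_size - offset);
            if (sent <= 0)
                return -1;            /* error or nothing could be sent */
        }
        return 0;
    }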
3. Reduce the frequency of interrupts (mainly for high-load situations).
There are two main methods: batching interrupts and temporarily disabling interrupts. Batching interrupts effectively suppresses livelock during overload, but it does not fundamentally improve server performance. When the system shows signs of receive livelock, interrupts can be disabled temporarily to relieve the burden on the system and re-enabled once receive buffers become available again. However, this method may cause packet loss if the receive buffer is not large enough.
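Interrupt batching itself is implemented in the network driver, but the same idea can be sketched at the system-call level on Linux with recvmmsg, which delivers a whole batch of datagrams for a single kernel crossing (the batch and buffer sizes below are arbitrary assumptions):

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    #define BATCH 32                   /* arbitrary example batch size */

    /* Receive up to BATCH UDP datagrams with a single system call,
     * amortizing the per-packet kernel-entry cost. */
    static int receive_batch(int fd)
    {
        static char bufs[BATCH][2048];
        struct mmsghdr msgs[BATCH];
        struct iovec iovs[BATCH];

        memset(msgs, 0, sizeof(msgs));
        for (int i = 0; i < BATCH; i++) {
            iovs[i].iov_base = bufs[i];
            iovs[i].iov_len  = sizeof(bufs[i]);
            msgs[i].msg_hdr.msg_iov    = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }

        int received = recvmmsg(fd, msgs, BATCH, 0, NULL);
        /* msgs[i].msg_len now holds the length of the i-th datagram */
        return received;               /* number of datagrams, or -1 */
    }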
The performance of the Web server is a key part of the overall Web system, and improving it has long been a topic of attention. From the analysis of how Web servers work and of existing optimization methods and technologies, we conclude that improving Web server performance also requires a detailed, case-by-case analysis: in a specific application environment, appropriate optimization measures should be taken according to its characteristics.