how the server sends data.
The server program writes the data to be sent into its own user-space memory. It then issues a system call through the operating system's interface. The kernel copies the data from user-space memory into a kernel buffer, notifies the NIC to come and fetch it, and the CPU moves on to other work. The NIC copies the data from the kernel buffer the CPU pointed it at into the NIC buffer, converts the bytes into bits, and sends them out to the network as an electrical signal.
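A minimal sketch of this path in Python (the peer address and payload are made up for illustration): the sendall() call is the "system call" step, and the kernel-buffer copy and NIC transfer all happen inside the OS after it.

import socket

data = b"hello from the server"                        # hypothetical payload
sock = socket.create_connection(("example.com", 80))   # hypothetical peer
sock.sendall(data)   # system call: user space -> kernel buffer -> NIC buffer -> wire
sock.close()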
Note: data inside the computer is copied in units of the bus width. For example, on a 32-bit operating system, data is copied 32 bits at a time.
The bus is like a 32- or 64-lane road. Data in the computer is stored as 0s and 1s, and each lane carries a single 0/1 per transfer, so a 32-bit bus can only copy 32 0/1s at a time.
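A quick back-of-the-envelope example of what that means for a single packet (the 1500-byte size is just a typical Ethernet frame, assumed here):

bus_width_bits = 32              # a 32-bit bus: 32 "lanes", 32 bits per copy
packet_bytes = 1500              # hypothetical Ethernet-sized packet
print(packet_bytes * 8 / bus_width_bits)   # 375.0 bus transfers to copy one packet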
The speed of the data in the network cable
Network transmission media include optical cable and copper cable. In copper cable the signal travels at about 2.3x10^8 m/s; in optical cable it travels at about 2.0x10^8 m/s.
Light propagates at 3.0x10^8 m/s in a vacuum, but an optical cable carries the signal by repeated reflection rather than in a straight line, so the light actually travels a much longer path than the straight-line distance. That is why the transmission speed in optical cable is only about 2.0x10^8 m/s.
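Plugging those speeds into a concrete (made-up) distance shows the scale of the difference:

distance_m = 1_000_000           # hypothetical 1000 km link
print(distance_m / 2.0e8)        # 0.005 s  = 5.0 ms in optical cable
print(distance_m / 2.3e8)        # ~0.0043 s = 4.3 ms in copper cable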
what is bandwidth.
Definition of bandwidth: the rate at which data is sent.
Unit of bandwidth:
100Mbps = 100M bits per second
The "100M" bandwidth an operator sells usually means 100M bits per second:
100Mbps = 12.5MBps
Note: the "100M" most people picture is 100MB, but the unit of bandwidth is the bit (Mb), and 1MB = 8Mb. So the operator's "100M broadband" is really only "12.5M broadband" in bytes.
what affects the speed of data transmission (bandwidth).
1. The sending speed is determined by the receiver's receiving speed. At the data link layer, to ensure no data is lost while being received, the receiver must tell the sender whether the current sending speed is reasonable. If the receiver cannot keep up, it tells the sender to slow down. The transmission speed (i.e. bandwidth) is therefore determined by how fast the receiver can receive.
2. It is also related to the degree of parallelism of the transmission medium. The medium can be seen as a multi-lane road: data consists of 0s and 1s, and each lane carries one 0/1 per unit of time. The more lanes the road has, the more 0/1s each send carries, and the higher the transmission speed (i.e. bandwidth).
Why do operators limit bandwidth?
Our server connects to the Internet through a switch. The Internet consists of countless routers and hosts; routers store and forward packets, routing each packet by its destination address until it is delivered to the target host.
Since a switch usually has multiple servers attached, a server hands the data it wants to send to the switch, which passes it to a router; the router stores the packets in its cache and forwards them in order. If a server sends data too fast and the router's cache fills up, the packets that follow are lost. So the speed at which a server sends data to the router must be limited, which is exactly what limiting the server's bandwidth means, and this limiting is done by the switch the server is attached to. As the above shows, as long as the switch controls the speed at which it receives, it can limit the speed at which the server sends.
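One common way such a device caps a sender's rate is a token bucket. The sketch below is only an illustration of the idea, not the mechanism any particular switch is claimed to use:

import time

class TokenBucket:
    # Tokens refill at `rate` bytes/second, up to a `burst` ceiling.
    def __init__(self, rate, burst):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True          # within budget: forward the packet
        return False             # budget exhausted: queue or drop the packet

limiter = TokenBucket(rate=1.25e6, burst=64 * 1024)   # hypothetical 10 Mbps cap
print(limiter.allow(1500))                            # True: one packet fits the budget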
what is exclusive bandwidth. What is shared bandwidth.
1. Exclusive Bandwidth
If the export bandwidth of a router is 100Mbps and there are 10 hosts in the same broadcast domain, the switch limits the maximum export bandwidth of each host to 10Mbps, and every host gets that 10Mbps under all circumstances. This is exclusive bandwidth: it is not affected by the other hosts in the same broadcast domain, and the maximum export bandwidth is 10Mbps at all times.
2. Shared Bandwidth
Assume the export bandwidth of the router is still 100Mbps, but the operator, in order to earn more money, lets more than 10 hosts into the same broadcast domain. The average maximum bandwidth per host then drops below 10Mbps. Even if the switch still limits each host's maximum export bandwidth to 10Mbps, when the hosts generate heavy network traffic there is no guarantee that each one actually gets 10Mbps; at that point the hosts compete with one another for bandwidth.
To sum up, exclusive 10M bandwidth guarantees the server a maximum export bandwidth of 10Mbps under all circumstances, unaffected by the other hosts in the same broadcast domain, while shared 10M bandwidth only reaches the 10Mbps maximum when the other hosts in the broadcast domain are idle.
what is response time.
Response time is the time from the moment the first 0/1 of the packet leaves the server until the moment the client receives the last 0/1.
Response time = sending time + transmission time + processing time
Sending time: the time from when the first 0/1 of the packet starts being sent until the last 0/1 has been sent.
Sending time = packet size in bits / bandwidth
Transmission time: the time the data spends travelling on the communication line.
Transmission time = transmission distance / propagation speed
(the propagation speed is roughly 2x10^8 m/s)
Processing time: the time the data spends being stored and forwarded in each router along the way. Processing time is difficult to calculate precisely.
Response time = (packet size in bits / bandwidth) + (transmission distance / propagation speed) + processing time
Download speed = data size in bytes / response time
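A worked example with made-up numbers: a 100 MB download over an exclusive 10Mbps link across 1000 km, assuming 10 ms of total router processing.

packet_bits = 100 * 1024 * 1024 * 8      # 100 MB payload, in bits
bandwidth = 10 * 10**6                   # 10 Mbps
sending_time = packet_bits / bandwidth   # 83.886 s
transmission_time = 1_000_000 / 2.0e8    # 1000 km at 2x10^8 m/s = 0.005 s
processing_time = 0.01                   # assumed 10 ms across all routers
response_time = sending_time + transmission_time + processing_time
print(response_time)                         # ~83.9 s: the sending time dominates
print(100 * 1024 * 1024 / response_time)     # download speed: ~1.25 MB/s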
What is the throughput rate.
Throughput rate: the number of requests the server processes per unit of time.
Unit: reqs/s
The throughput rate is used to measure the server's ability to process requests.
When there are very few requests the throughput rate is low, because the server's capability is not yet being exercised. As the number of requests grows, the throughput rate grows with it, but once the number of concurrent requests rises past a certain point, the throughput rate stops rising and even falls. That critical point is the server's maximum throughput rate.
If our site is going to run a promotion, we can use the method above to estimate the server's maximum throughput rate and judge whether the server can withstand the sales pressure.
What is the concurrency number. What is the number of concurrent users.
To work out the difference between the concurrency number and the number of concurrent users, you first need to understand the HTTP protocol.
HTTP is an application-layer protocol and is itself connectionless; that is, the client and the server disconnect after each completed data exchange and re-establish the connection the next time they communicate. However, HTTP/1.1 has a keep-alive field that lets the two sides hold the connection open for a while after an exchange completes. If the client wants to communicate with the server again during that time, no new connection needs to be created; the existing one is simply reused, which improves communication efficiency and reduces extra overhead.
Concurrency number: the number of requests the client makes to the server. Whether or not a request reuses an already-created connection, every request to the server counts toward the concurrency number.
Number of concurrent users: the number of TCP connections created. If a browser sends 10 requests to the server over one already-created connection, that counts as one concurrent user.
Note: browsers are now multithreaded and can establish multiple TCP connections to the same server, so one user can produce multiple concurrent users. "Concurrent users" and "users" therefore cannot simply be equated, which is worth keeping in mind.
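A small sketch of the distinction using Python's standard library (pointed, for concreteness, at the same host tested with ab later in this article). One HTTPConnection is one TCP connection, so the loop below adds 10 to the concurrency number but only 1 to the number of concurrent users, assuming the server keeps the connection alive:

import http.client

conn = http.client.HTTPConnection("www.acmcoder.com", 80)   # one TCP connection = one concurrent user
for _ in range(10):
    conn.request("GET", "/index.php")   # each request counts toward the concurrency number
    resp = conn.getresponse()
    resp.read()                         # drain the body so the connection can be reused
conn.close()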
average request wait time and server average request processing time
Average request wait time: the time from when a user clicks a button until the new page finishes loading.
Average server request processing time: the time from when the server takes a request out of the wait queue until it finishes processing that request.
To sum up: the average request wait time is seen from the user's perspective and is the metric that measures the quality of the user experience,
while the average server request processing time is the metric that measures the server's performance; it is in fact the reciprocal of the throughput rate.
Note: The average request wait time is not proportional to the average server request processing time.
Average request wait time = request transmission time + request wait time + request processing time
Average server request processing time = request processing time
So, when the number of requests is very small, a request sent by the browser is processed by the server immediately without queueing, and the request wait time is proportional to the server request processing time. But when requests are extremely numerous, they arrive much faster than the server can process them, so many requests pile up in the wait queue. Even if the server's ability to process requests is very strong (that is, the average server request processing time is very short), the user's wait time is still long; at that point the user wait time is no longer proportional to the server request processing time.
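A toy calculation with made-up numbers makes the point:

service_time = 0.015                 # the server handles a request in 15 ms (hypothetical)
queued_ahead = 100                   # requests already sitting in the wait queue
user_wait = queued_ahead * service_time + service_time
print(service_time)                  # 0.015 s : server request processing time, still tiny
print(user_wait)                     # 1.515 s : what the user actually waits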
use Apache Bench for stress testing
We use Apache Bench (ab), which ships with the Apache server, to stress test the site. ab is easy to use, and crucially the test can be launched directly on the server itself, so the measured time includes no network transmission time; from it we can read off the server's processing time and hence judge the server's performance.
1. Stress test command
ab -n 100 -c 10 http://www.acmcoder.com/index.php

-n 100: the total number of requests
-c 10: the number of concurrent users
http://www.acmcoder.com/index.php: the page to be tested
2. Analysis of test results
Server Software:        openresty            # server software
Server Hostname:        www.acmcoder.com     # site under test
Server Port:            80                   # port accessed
Document Path:          /index.php           # page under test
Document Length:        162 bytes            # body length of the HTTP response
Concurrency Level:      10                   # number of concurrent users
Time taken for tests:   1.497209 seconds     # time the test took
Complete requests:      100                  # total number of requests
Failed requests:        0                    # failed requests (requests answered with a non-2xx status code are recorded under Non-2xx responses)
Write errors:           0
Non-2xx responses:      100                  # responses whose HTTP status code is not 2xx
Total transferred:      32400 bytes          # total response data, including HTTP headers and bodies but not the request data
HTML transferred:       16200 bytes          # body data of the HTTP responses
Requests per second:    66.79 [#/sec] (mean)                               # throughput rate
Time per request:       149.721 [ms] (mean)                                # user average request wait time
Time per request:       14.972 [ms] (mean, across all concurrent requests) # server average request processing time
Transfer rate:          20.71 [Kbytes/sec] received   # the server's data transfer speed (its export bandwidth in the extreme case)

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       40    46   4.8
Processing:    41         5.0
Waiting:       40         4.9
Total:         81    92   9.7      92   116

Percentage of the requests served within a certain time (ms)
  50%     92     # 50% of requests completed within 92 ms
  66%     98
  75%     99
  90%    114
  99%    116
 100%    116 (longest request)
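The three timing figures in the report are tied together by the relations given earlier in this article; checking them against the numbers above:

complete_requests = 100
time_taken = 1.497209                    # seconds
concurrency = 10

throughput = complete_requests / time_taken
print(throughput)                        # ~66.79 req/s (Requests per second)
print(1 / throughput * 1000)             # ~14.97 ms (server average request processing time, the reciprocal of the throughput rate)
print(concurrency / throughput * 1000)   # ~149.72 ms (user average request wait time)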
How to select the site's measured URL.
A website may have many URLs, and each URL corresponds to different processing, so the test result of a single URL is not representative. We therefore need to select a series of representative URLs and take the weighted average of their test results as the site's overall performance.
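A sketch of that weighted average (the URLs, the weights, and the two extra throughput figures are made up; only /index.php comes from the test above):

results = [
    # (URL, throughput in req/s, weight = share of real traffic)
    ("/index.php",  66.79, 0.5),
    ("/search.php", 40.00, 0.3),   # hypothetical
    ("/login.php",  90.00, 0.2),   # hypothetical
]
overall = sum(t * w for _, t, w in results) / sum(w for _, _, w in results)
print(overall)                     # ~63.4 req/s, the site-wide estimate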