Considerations and testing methods for DDoS security products in the Internet cloud ecosystem (I)
The three elements of information security are "confidentiality", "integrity", and "availability". DoS (Denial of Service) attacks target availability: they exploit defects in the target system's network services, or directly consume system resources (mainly network resources), so that the target can no longer provide normal service.
A DDoS (Distributed Denial of Service) attack is one in which the attacker controls many clients (called bots, collectively a botnet) and directs them to attack one or more targets together, either simultaneously or according to some schedule, to amplify the denial-of-service effect.
In a sense, without availability there is nothing left to discuss: what use are "confidentiality" and "integrity" to a service that may vanish from the network at any moment? DDoS is currently among the most powerful and hardest-to-defend attack classes. Compared with purely technical exploits, a DDoS attack is more like the act of a thug or bandit: brute force rather than finesse.
DDoS attack principles and common attacks
Most data on the network is carried over TCP/IP (along with other protocols such as UDP), and packets must follow strict protocol standards. Normally, protocol-conformant packets, especially those carrying business logic, are harmless to the target service and to network equipment. Think of the ticket inspector at a station: if every passenger presents a legal, recognizable ticket, inspection is smooth and safe. But if tickets are illegal or hard to judge, say 9 out of 10 passengers hold invalid tickets (or are loiterers with no ticket at all), the inspector spends most of their energy identifying bad tickets and abnormal passengers, delaying inspection for the normal ones. And if large numbers of illegitimate passengers crowd onto the train, normal passengers cannot board at all. Entry points and service resources are thus the places most easily blocked, and the attacker's targets.
By analogy with the inspector's workflow, DoS arises in three ways: 1. a flood of abnormal (useless) packets can overload network devices or servers; 2. by exploiting defects in packets or protocols, hand-crafted incomplete or malformed packets can leave a device or server unable to process traffic normally, producing DoS; 3. packets that are well-formed but do not match the target service's normal business logic, sent in volume, can also produce DoS.
In general, DDoS attacks fall into two types:
Volume-based ("flood-style") attacks hurl massive amounts of data, both normal and abnormal packets, overloading network devices and bandwidth, consuming resources, and clogging the IDC entrance. Many of them ride on upper-layer protocols close to the business itself, with no sharp boundary from legitimate traffic, so defending against this "traffic-type" DDoS poses great challenges for ISPs and ICPs. UDP Flood is a typical example.
Resource-based ("slow-style") attacks are, by comparison, subtler and more technical. They exploit defects in protocols, servers, or software so that only a few packets are needed to tie up limited resources indefinitely, leaving the target unable to process normal data and making it temporarily disappear from the Internet. The well-known slow-connection attack Slowloris belongs to this "resource-exhaustion" class.
In practice, however, pure single-type attacks are rare; real network environments see hybrid attacks mixing normal and abnormal traffic.
A common DDoS attack pattern is as follows:
Attackers gather large numbers of bots from every corner of the Internet, then use a controller to direct them in hybrid attacks against the target service, such as SYN flood plus DNS query flood, or even redirecting traffic from other websites at the target server. No so-called high-performance architecture in the industry can defend against every type of DDoS while fully preserving service; in most cases we are making a compromise between the two.
The core purpose of DDoS never changes, however: "unlimited abuse of limited resources", whether direct or indirect, all in order to destroy "availability".
Defense against common attacks: SYN Flood
SYN flood is the most classic DDoS attack. It exploits a defect in the three-way handshake design of TCP/IP, and it has been active for decades; SYN flood remains rampant on the Internet. It is so powerful precisely because it targets a protocol-level defect: repairing or rebuilding the enormous Internet infrastructure is practically impossible.
The three-way handshake proceeds as follows:
1. The client sends a SYN packet to the server, containing the client's port number and an initial sequence number x; 2. on receiving it, the server replies with a SYN+ACK packet, containing the acknowledgment number x + 1 and the server's own initial sequence number y; 3. the client then sends an ACK packet to the server, containing the acknowledgment number y + 1 and the sequence number x + 1.
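The sequence-number bookkeeping above can be sketched in a few lines. This is a toy simulation with plain Python values, not real sockets; the tuple layout is purely illustrative:

```python
# Minimal sketch of the TCP three-way handshake sequence-number exchange,
# simulated without sockets. Each message is a (flags, seq, ack) tuple.

def handshake(client_isn: int, server_isn: int):
    """Return the three handshake messages as (flags, seq, ack) tuples."""
    # 1. Client -> Server: SYN carrying the client's initial sequence number x.
    syn = ("SYN", client_isn, None)
    # 2. Server -> Client: SYN+ACK acknowledging x+1, carrying server ISN y.
    syn_ack = ("SYN+ACK", server_isn, client_isn + 1)
    # 3. Client -> Server: ACK acknowledging y+1; client sequence is now x+1.
    ack = ("ACK", client_isn + 1, server_isn + 1)
    return [syn, syn_ack, ack]

for flags, seq, ack in handshake(client_isn=1000, server_isn=5000):
    print(flags, seq, ack)
```

Note that the server must remember y between steps 2 and 3; that stored state is exactly what a SYN flood abuses.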
To keep TCP connections reliable, the handshake includes some exception handling around the third step:
the server retries the SYN+ACK some 3 to 5 times while waiting for a response from the client's IP, holding the half-open connection in a wait queue for roughly 30 seconds or more between traversals; if no response ever arrives, the connection is abandoned. In addition, after sending the SYN+ACK in the second step, the server reserves resources to store the connection's state in preparation for the third handshake.
If the request is forged, the server still allocates those resources and waits for a third handshake that never comes, in the name of reliability. When forged requests arrive in bulk, the server burns enormous resources handling half-open connections and retrying the third handshake; the wait queue fills with malicious entries, no resources remain for legitimate requests, and normal connections can no longer be processed: denial of service.
The general idea of defending against SYN flood is to compensate for this handshake defect: reduce the resource pressure per half-open connection, lengthen the wait queue, and lower the retry count.
SYN cookies relieve the pressure on server resources: the server saves no state while waiting for the client's confirmation. Instead, a value derived from a time seed replaces the normal random number as the initial sequence number y in the SYN+ACK. When the third-handshake ACK arrives, a cookie-verification algorithm checks whether the acknowledged sequence number matches, and only then is the handshake completed.
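The stateless idea can be sketched as follows. This is a toy scheme, not the Linux kernel's actual SYN-cookie encoding: the secret key, window length, and 4-byte truncation are all illustrative assumptions.

```python
# Toy SYN-cookie sketch: the server derives its initial sequence number y
# from a keyed hash of the connection 4-tuple plus a coarse time counter,
# so it stores no per-connection state while waiting for the final ACK.
import hmac
import hashlib
import time

SECRET = b"server-secret"   # assumed secret key, rotated in real deployments
WINDOW = 64                 # seconds per time bucket (illustrative)

def make_cookie(src, sport, dst, dport, now=None):
    """Compute the stateless initial sequence number y for this 4-tuple."""
    t = int((now if now is not None else time.time()) // WINDOW)
    msg = f"{src}:{sport}-{dst}:{dport}-{t}".encode()
    return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")

def check_ack(src, sport, dst, dport, ack_no, now=None):
    """The final ACK must carry cookie + 1; accept current or previous bucket."""
    now = now if now is not None else time.time()
    for skew in (0, WINDOW):
        if ack_no == make_cookie(src, sport, dst, dport, now - skew) + 1:
            return True
    return False

y = make_cookie("10.0.0.1", 12345, "10.0.0.2", 80, now=1000.0)
print(check_ack("10.0.0.1", 12345, "10.0.0.2", 80, y + 1, now=1000.0))  # True
```

A spoofed flood never completes the third handshake, so nothing is ever stored; a legitimate client's ACK reconstructs and validates the cookie.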
The net.ipv4.tcp_max_syn_backlog kernel parameter (set in /etc/sysctl.conf) trades server memory for a longer wait queue, so that an attack cannot easily fill the queue and deny service;
net.ipv4.tcp_synack_retries lowers the number of SYN+ACK retries for the server's second handshake, so half-open connections do not hold resources as long.
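Put together, a hardened host configuration might look like the fragment below. The specific values are illustrative only and must be tuned to the host's memory and expected load:

```
# /etc/sysctl.conf fragment (illustrative values; apply with `sysctl -p`)
net.ipv4.tcp_syncookies = 1          # enable SYN cookies under backlog pressure
net.ipv4.tcp_max_syn_backlog = 8192  # longer half-open wait queue (costs memory)
net.ipv4.tcp_synack_retries = 2      # fewer SYN+ACK retries on the 2nd handshake
```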
Hardening the protocol's own resilience naturally costs extra server resources, so server capacity must still be weighed against the needs of the service itself.
Besides strengthening the protocol, SYN flood defense can also identify abnormal behavior, for example the first-packet-drop technique and blacklists/whitelists.
First-packet drop means discarding the client's first SYN and waiting for the client's SYN retransmission; IPs that retransmit are added to a whitelist, while a source that never retransmits is judged to be an attacker. The process is as follows:
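The first-packet-drop flow can be sketched as a tiny state machine. This is a minimal illustration; real devices also expire entries, handle per-port state, and rate-limit the whitelist:

```python
# Sketch of first-packet drop (SYN retransmission check): drop the first SYN
# from an unknown source. A real client's TCP stack retransmits the SYN, and
# sources that retransmit are whitelisted; spoofed one-shot SYNs never do.

class FirstPacketDrop:
    def __init__(self):
        self.seen = set()       # sources whose first SYN we dropped
        self.whitelist = set()  # sources that retransmitted and are trusted

    def on_syn(self, src: str) -> str:
        if src in self.whitelist:
            return "forward"
        if src in self.seen:            # a retransmission: trust the source
            self.whitelist.add(src)
            return "forward"
        self.seen.add(src)              # first packet: silently drop
        return "drop"

f = FirstPacketDrop()
print(f.on_syn("1.2.3.4"))  # drop    (first SYN discarded)
print(f.on_syn("1.2.3.4"))  # forward (retransmitted, whitelisted)
```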
The drop-and-retry scheme defends well against SYN flood, but it is not well suited to running on the server itself: it hurts business processing, for example by increasing response time. One idea is to separate the first-packet-drop step from business processing and use a dedicated device for it; almost all traffic-cleaning (scrubbing) devices have this function, and drop-and-retry has further optimizations on such devices. Handling the first-packet drop on a separate device in this way is generally called a TCP Proxy.
In fact, first-packet drop is only the narrow sense of TCP Proxy. Almost any processing that can be taken out of the business path can be placed on the TCP-proxy side: for example, the SYN cookie and first-packet-drop methods above can run on the cleaning device, which simulates the server to perform the three-way-handshake verification, maintains its own blacklists and whitelists, and finally forwards the business data to the server. Mature cleaning devices can identify many more abnormal and malformed TCP packets, even simulating responses to expose client attacks and malicious behavior, then cleaning and filtering that traffic.
DNS Query Flood attack principle
DNS query flood can be seen as an upgraded UDP flood. UDP flood is a "traffic-type" DoS attack, most commonly hurling massive UDP packets at backbone networks and network devices. Defending against UDP flood is hard because UDP is connectionless and carries varied protocols. On the other hand, few IPs actually provide UDP services, forged source IPs are relatively easy to filter, and most network transport is not UDP, so pure UDP traffic attacks have gradually declined. Streaming-media and DNS services, which do run over UDP, remain the key targets of UDP flood attacks.
The biggest difference between the two is that DNS query flood is an application-layer attack while UDP flood is a protocol-layer attack; the higher the layer, the closer to the business and the harder the defense. A query flood actually executes a real query, a genuine business action, but when many bots issue such domain-name queries en masse, the server can no longer return results for normal queries: denial of service. To increase randomness, a query flood forges not only IPs and ports at the protocol layer, as UDP flood does, but also parameters and domain names at the application layer; the randomness is meant to bypass the DNS servers' filtering and caching.
Defense against query flood can be considered from three directions:
apply least privilege to domain resolution: discard queries for any domain not on the whitelist, improving processing performance;
force a retry over TCP:
similar to first-packet drop, the first DNS packet is simply discarded, forcing the client to repeat the query over TCP;
increase DNS caching and the domain-request hit rate, to avoid excessive resolution cost.
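The force-TCP idea can be sketched as below. This is a simplified model: a real implementation sets the TC (truncated) bit in an actual DNS response rather than returning tuples, and trusted entries would expire.

```python
# Sketch of "force TCP": answer the first UDP query from an unknown client
# with only the TC (truncated) flag, which makes a real resolver retry the
# same query over TCP; spoofed UDP-only flood sources never come back on TCP.

class DnsTcpForcer:
    def __init__(self):
        self.trusted = set()   # clients that have proven a real TCP stack

    def on_udp_query(self, src: str, qname: str):
        if src in self.trusted:
            return ("answer", qname)
        return ("truncated", qname)     # TC=1: "please retry over TCP"

    def on_tcp_query(self, src: str, qname: str):
        self.trusted.add(src)           # completing TCP implies a real client
        return ("answer", qname)

d = DnsTcpForcer()
print(d.on_udp_query("8.8.8.8", "example.com"))  # ('truncated', 'example.com')
d.on_tcp_query("8.8.8.8", "example.com")
print(d.on_udp_query("8.8.8.8", "example.com"))  # ('answer', 'example.com')
```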
Slowloris attack principle
This attack runs contrary to most others: it is famous for being slow, and in some cases hard even to notice. It exploits certain characteristics of web servers, and despite its long history it still proves effective in some situations.
Slowloris targets a web container's maximum concurrency. Whatever the container, there is an upper limit on concurrent connections; once it is reached, the web server can accept no new requests. That is:
when the web server receives a new HTTP request, it opens a new connection to process it and closes the connection when processing completes; while a request is still in progress the connection stays open, and further HTTP requests open further connections. If every connection slot stays occupied, the web server cannot process any new request.
Slowloris exploits an HTTP feature to achieve exactly this: HTTP marks the end of the headers with \r\n\r\n, so if the web server has received only \r\n, the header section has not ended and the connection is held open waiting for the rest. In real attacks the Connection header is usually set to keep-alive so the web server keeps the TCP connection open, and the attacker then trickles header key-value pairs to the server at intervals, keeping the connection alive indefinitely. One can likewise set Content-Length to a large value and POST data to the server a little at a time: the HTTP POST variant of this DoS.
Such connections are easily created with a handful of threads or bots; without any large traffic volume, the web server's connection count soon hits its ceiling and no new HTTP requests can be processed.
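The exhaustion mechanism can be modeled in a few lines. This is a toy model of a worker pool, not a real server; the terminator check and slot accounting are the only points being illustrated:

```python
# Toy model of why unfinished headers exhaust a connection pool: each
# connection holds a slot until the header terminator "\r\n\r\n" arrives,
# so a handful of clients that never send it pin every slot.

class ToyWebServer:
    def __init__(self, max_conns: int):
        self.max_conns = max_conns
        self.buffers = {}            # connection id -> bytes received so far

    def accept(self, conn: str) -> bool:
        if len(self.buffers) >= self.max_conns:
            return False             # pool exhausted: new requests rejected
        self.buffers[conn] = b""
        return True

    def feed(self, conn: str, data: bytes):
        self.buffers[conn] += data
        if b"\r\n\r\n" in self.buffers[conn]:
            del self.buffers[conn]   # request complete: slot freed

srv = ToyWebServer(max_conns=3)
for i in range(3):
    srv.accept(f"slow{i}")
    srv.feed(f"slow{i}", b"GET / HTTP/1.1\r\nX-a: 1\r\n")  # headers never end
print(srv.accept("victim"))  # False: a normal client is locked out
```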
Defense against Slowloris likewise starts from its root causes: 1. TCP connection duration; 2. transmission time of the HTTP headers; 3. the number of packets per TCP connection. The corresponding defenses are:
control and collect statistics on TCP connection duration, blacklisting sources that hold abnormally long connections; set a maximum transmission time for the HTTP headers, closing the connection and black-holing the request when it is exceeded; and count the packets per TCP connection over its lifetime, since too few packets is itself abnormal.

HTTP Flood (CC) attack principle
Compared with the typical DDoS attacks above, HTTP flood is the most troublesome: no current anti-DDoS product defends against it really effectively. That is because HTTP flood is not a network-layer attack but an application-layer attack. Its other name is Challenge Collapsar (CC), a provocation aimed at "Collapsar" (black hole), the anti-DDoS appliance made by NSFOCUS (Green Alliance) at the time.
Network-layer attacks all have conspicuous signatures, but application-layer attacks fully simulate user requests, much like search engines and crawlers: they have no sharp boundary with normal business and are hard to identify. That is the essence of CC.
Web service performance is bounded by its resources: directly, CPU, memory, disk, and network (the four classic metrics of performance testing), which in turn are constrained by database queries, network bandwidth, file sizes, memory allocation, algorithms, and other hardware and software conditions.
Some transactions and pages consume disproportionate resources. If a web application does paging over partitioned tables, for example, then obviously oversized page parameters and rapid page-turning will eat web-server resources, especially under high concurrency and frequent calls; transactions like this were the earliest targets of CC attacks. Since most modern attacks are hybrids blended into normal business traffic, frequent operations that simulate user behavior can all be regarded as CC attacks.
(The attacker must still guess and judge which transactions consume resources most heavily.) In general, what characterizes application-layer CC attacks is their blurred boundary with the business. Ticket-scalping software hammering 12306 is, to some degree, a CC attack; likewise, when a website or shop runs a promotion and is hit by a sudden traffic surge its web servers cannot absorb, the effect is a kind of CC as well.
Because CC attacks strike the backend services of web applications, beyond denial of service they directly affect the application's functionality and performance: response times, database services, disk reads and writes, and so on, potentially producing functional and performance anomalies.
CC attacks are also easier to launch than the DDoS attacks discussed above. Since the traffic they generate looks like normal business traffic and is hard to identify, large botnets are often unnecessary: the Internet offers plenty of HTTP proxies through which one can attack the target directly, and an attacker may even compromise a high-traffic website and redirect its visitors at the target.
Although there is no fully effective defense against CC, several methods still provide some protection.
Restrict access frequency: locate the client by IP and cookie and measure its access frequency over a time window; clients that access too frequently can be temporarily blacklisted or simply served an error page. The frequency logic can also be placed on the cleaning device, which blacklists over-frequent clients directly. The method is simple but has two shortcomings: 1. it cannot judge attacks arriving through proxy servers; 2. it may kill normal access by mistake.
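The frequency check can be sketched as a sliding window per client key. All thresholds and the ban duration here are illustrative; a real device would also need the proxy and false-positive caveats above:

```python
# Sliding-window rate limiter: track each client's hit timestamps within a
# fixed window; clients over the threshold go to a temporary blacklist.
from collections import deque

class FrequencyLimiter:
    def __init__(self, max_hits: int, window_s: float, ban_s: float = 60.0):
        self.max_hits, self.window_s, self.ban_s = max_hits, window_s, ban_s
        self.hits = {}       # client key (IP/cookie) -> deque of timestamps
        self.banned = {}     # client key -> ban expiry time

    def allow(self, client: str, now: float) -> bool:
        if self.banned.get(client, 0.0) > now:
            return False                         # still blacklisted
        q = self.hits.setdefault(client, deque())
        while q and q[0] <= now - self.window_s:
            q.popleft()                          # drop hits outside the window
        q.append(now)
        if len(q) > self.max_hits:
            self.banned[client] = now + self.ban_s   # temporary blacklist
            return False
        return True

lim = FrequencyLimiter(max_hits=3, window_s=1.0)
print([lim.allow("1.2.3.4", t) for t in (0.0, 0.1, 0.2, 0.3)])
# [True, True, True, False]
```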
CDN caching: caching mitigates CC because most requests can be answered directly from cache. This applies to a single server as well as to Internet-scale services; large Internet architectures typically cache content at CDN nodes.
Human-machine identification: the most common form is the CAPTCHA, whose fundamental purpose is to intercept automated request replay. But CAPTCHAs hurt user experience while doing so, and if a CAPTCHA is not "random" enough it can be bypassed via a rainbow table of precomputed answers. The User-Agent header is another signal, but it too can be forged, rendering that check useless. Having the client parse JavaScript (or Flash) is yet another test: a simulated request cannot execute JS the way a real browser can, so the server sends the client a JS snippet, and when the expected follow-up request arrives, it processes it and adds the client to the whitelist.
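The JS-challenge idea can be sketched as a challenge-response loop. This is a deliberately simplified model: the "JS snippet" is stood in for by a trivial arithmetic problem, and all names are illustrative.

```python
# Sketch of a JS-style challenge: the server hands the client a tiny
# computation; a real browser executes it and returns the answer with its
# next request, which whitelists the client. A dumb replay bot cannot.
import random

class JsChallenge:
    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        self.pending = {}        # client -> expected answer
        self.whitelist = set()

    def handle(self, client: str, answer=None):
        if client in self.whitelist:
            return "serve"
        if answer is not None and self.pending.get(client) == answer:
            self.whitelist.add(client)           # client executed the "JS"
            return "serve"
        a, b = self.rng.randrange(100), self.rng.randrange(100)
        self.pending[client] = a + b
        return ("challenge", a, b)   # stands in for a JS snippet computing a+b

c = JsChallenge()
_, a, b = c.handle("browser")            # a bot ignores this; a browser solves it
print(c.handle("browser", answer=a + b))  # serve
```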
Web containers: the containers themselves offer some defensive capability; parameters such as the request timeout, maximum client count, and keep-alive timeout (Timeout, MaxClients, and KeepAliveTimeout in Apache, for example) can be tuned as trade-offs against business needs.
DDoS attack features and defense methods in the Internet cloud ecosystem
As the Internet develops and ever more people enjoy the convenience of network technology, the security problems it faces grow increasingly serious; defense methods must be continuously researched and improved to cope with explosively growing attacks.
The cloud ecosystem that has risen rapidly in recent years must likewise handle a wide variety of attacks, and, matching the complexity of the cloud itself, the DDoS it faces is no longer pure, single-type DDoS.
Carrier networks, backbone networks, IDC entrances, clusters, CDN, load balancing, hosts, services, and more are deployed together to support an enormous cloud ecosystem network.
In such a multi-level, complex network environment, a problem anywhere can affect the business. Some attacks no longer work at a single layer but exploit vulnerabilities or defects in combinations of layers. The long chain thus widens the DDoS attack surface: as more components and services move to the cloud, any one component can take a service line down.
Moreover, because different users' services share the same physical machines, an attack aimed at any one user can affect the others.
There are no new elements, only new combinations. Advanced attacks combine multiple layers and methods according to the target's attack surface and environment. Such hybrid attacks are deceptive, targeted, and obfuscated.
Deception: for example, during a SYN flood the attacker can mix in SYN+ACK verification packets to confuse the cleaning device's SYN-cookie checks, strengthening the attack and the pressure on the device. Deceptive mixes like this usually require some knowledge of the network's cleaning and decision policies; it is a battle of attack and defense, and any leak or successful guess of the policies has serious consequences.
Targeting: Internet hybrid attacks are highly targeted. A CC attack, for instance, may simulate not a user's browser requests but direct web API calls; since legitimate use of those calls is automated too, folding them into CC blurs the boundary between attack and normal business even further, and the cleaning device finds them still harder to filter out.
Obfuscation: the simplest hybrid is the direct combination of several DDoS types, such as SYN flood, Slowloris, and CC. It raises pressure on the cleaning device, and when the service comes under attack, the staff need extra time just to determine which attack is causing the damage.
Application-layer attacks
As more and more components, services, and applications migrate to the cloud's complex, massive network environment, top-layer applications become the target of every kind of DDoS. In the cloud, most devices and layers below the application exist to defend against attacks concentrated on the application layer; the infrastructure's relationship to applications is support and protection, and it cannot control the applications directly. How to defend the application layer against DDoS is currently the biggest challenge for DDoS defense in the cloud ecosystem.
Choosing bots (zombie hosts)
The cloud provides not only elastic compute, CDN, storage, and similar services, but also virtual hosts, VIPs, and ample bandwidth. Such an environment is a boon for small and mid-sized startups, and a resource for hackers.
Take scalpers, a peculiar profession in China, who hoard train tickets every Spring Festival rush. They are cloud customers too: they buy large numbers of hosts by the hour and by bandwidth, deploy their own images directly, and set those hosts grabbing tickets; hundreds of cloud hosts hammer the train-ticket system, and an hour's rental is enough. For the scalper the cost is trivially low.
For 12306, however, this is undoubtedly a nightmare. It is nominally normal business access, yet in a sense an attack, and it strikes the core business directly: normal users cannot log in or operate, and once the tickets are sold out, what is left for the ticketing site?
By the same token, what if the scalper is a hacker? What if those hosts are not purchased but compromised? Well-provisioned cloud hosts then become excellent attack machines.
Startups in the cloud thus suffer both hackers and attacks from their rivals, and must spend money on cloud services and security services alike. Amid the sighing, a word in defense of colleagues working on the security product line: compared with other products, a security system genuinely demands more human effort to maintain.
Security defense is less a technical system than a human operations system, because it is inherently semi-automated: seemingly high-end security products are really the accumulation of countless backend attack-and-defense battles. DDoS defense cannot be separated from the emergency response of operations, O&M, development, testing, customer service, network engineering, and other departments; staff must normally keep phone and network reachable, and may suddenly have to open a laptop at the roadside to handle an attack by hand.
Hence the choice of attack time is itself a factor in an attack's success or failure. Staffing differs between morning and night; application services and network congestion differ between day and evening. At an Internet company I once worked for, posting activity-related Weibo during team-building outings was banned, because competitors would schedule their own moves, new releases or attacks, for exactly those windows.
Security attack and defense is a continuous, strictly result-oriented process. For the attacker, which technique or vulnerability gets used hardly matters; the result matters, and every factor that affects the result must be considered.
DDoS defense in the Internet and cloud environments therefore remains a challenge, because the thing being defended is no longer an application or a service but an entire ecosystem.
Defense in depth
Defense in depth is a white-hat principle, and it applies to DDoS defense in the cloud ecosystem: every layer must carry its own security protection, independent of the other layers' protection, with its own alarms and tracking.
Only by planning security at each distinct level can an overall defense system be built. Different layers have different characteristics and thus different security measures, and their cooperation secures the whole. Some attacks, moreover, are best handled at particular levels: SYN flood and Slowloris defense belongs on the cleaning device, the firewall defends the network's bottom layers, host security takes responsibility for the applications on VM instances, internal firewalls cut a compromised host's connection to its controller when it is conscripted as a bot, and the SLB and CDN layers need their own cleaning and filtering as well.
As the cloud ecosystem matures, demands on cloud services grow in both directions, horizontal and vertical: scaling out, and minimal-footprint deployment.
Scaling out here means bringing up new service clusters when existing servers hit their limits or new functions arrive; minimal deployment means standing up all or part of the cloud ecosystem inside a LAN, the private clouds every cloud vendor is now pushing, for example.
Security defense is a default attribute of cloud products, so when the cloud environment scales out or is minimally deployed, the security defense must be equally portable: on one hand, virtual and physical security devices deployable anywhere in the network; on the other, the entire defense system able to move with the cloud and be replicated wherever the cloud is deployed.
The security engines in question, WAF, IPS, IDS, DLP, and the like, must become more capable in the Internet cloud ecosystem and provide engine services to nodes at every level.
On one hand, an engine must be available at the finest granularity, so it can be conveniently deployed at other layers or serve them directly; on the other, the cloud ecosystem must also defend against security problems and attacks between the layers themselves.
Making engine capability visible at this minimal granularity is a necessity for a first-rate cloud environment.
Business-integrated security
As noted above, application-layer DDoS strikes the business layer directly, and business security in the cloud ecosystem involves far more than DDoS.
It also includes other technical attacks (injection, phishing, brute-force cracking, and so on) and business-level attacks (fraud and the like). The security defense architecture must therefore integrate business security itself into the system.
Business security spans many fields and network structures, and different businesses have different security requirements, so business-security integration is mostly delivered in the form of services.
In the same spirit, the cloud's security defense system needs to be progressively turned into a service and offered externally.
First, different users have different business and security needs: the gaming industry's demand for DDoS defense is far higher than an ordinary website's, so users need a choice of defense strength;
second, some users are not satisfied with the default security engines and devices the cloud provides, and want to choose others.
For these reasons the cloud ecosystem, alongside its external services, also exposes security as a service, through interfaces or management pages.
Beyond the overall strategy of the defense system, architectural measures outside security technology proper can also combine with the defense. The cloud ecosystem is a whole that relies on cooperation between layers, and cloud security likewise: each layer both enjoys the protection of the defense system and cooperates in building it. That is an architectural requirement for every layer of the cloud.
Backup | cache | CDN
Every large Internet service supports business backup and caching, and both matter for security defense: backups speed up post-incident recovery and reduce losses, and caching mitigates DDoS. For this reason, the number of CDN nodes is one measure of a system's DDoS defense capacity.
Beyond node count, each node's VIP allocation should respond to actual attacks: for a given set of VIPs, reallocate them according to how severely each CDN node is attacked and how often each VIP is attacked; which CDN nodes get priority for a VIP changes the anti-DDoS outcome.
Also, for a distributed CDN, dynamic round-robin and hash-based scheduling behave quite differently under large-traffic attack.
Load balancing
Besides load balancing and VIPs, the load-balancing layer can also supply monitoring data: traffic-related (bps, pps, qps), connection-related (new and concurrent connections per VIP, TCP connection counts), and business-related (service processing capacity, failure handling, and so on).
Large Internet systems do have dedicated systems for post-hoc traffic analysis, but those analyses are generic and cannot serve a specific layer; business-related analysis in particular can only be done within each layer itself.
Beyond policy and architecture, some common security principles and techniques recur throughout the cloud security defense system; here is only a brief introduction.
Least privilege is one of the basic principles of security: grant only the permissions that are necessary, and never more. In the cloud, each layer must sort out its own business needs and permissions, ensure that cross-layer calls use minimal privileges, and treat operations requiring any other permission as untrusted.
Blacklists and whitelists are another basic principle. Where least privilege is graded trust across many permissions, black/white-listing is trust by absolute mutual exclusion: not white means black, and not black means white. Much of the defense process is condition-judging, and in that process black and white lists play an absolutely central role.
In a multi-level defense environment like the cloud ecosystem, however, note the following:
if the black and white lists are dynamic, keep each list maintained by a single piece of logic, and do not share lists across layers: each layer uses its own;
since different services and levels focus on different policies and rules, per-layer lists are both easier to maintain and reduce the blast radius of failures.
We mentioned in the CC defense discussion that cleaning devices compute access frequency from IP and cookie, but at the application layer access frequency is not absolute, so mistaken kills come easily.
One approach is to delegate the frequency judgment to each layer: the results are more targeted, the algorithms can differ per layer, and the outcomes are better.
A more mature approach is bypass monitoring of access frequency: when connections exceed a predefined threshold, switch the cleaning device into the main path to scrub the traffic, and let the cleaning results drive the next decision.
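The divert/restore logic of bypass monitoring can be sketched as a simple hysteresis switch. The thresholds are illustrative; real deployments watch several counters (bps, pps, new connections) rather than one:

```python
# Sketch of bypass monitoring with threshold-triggered diversion: a counter
# watched off the main path; when it crosses the upper threshold, traffic is
# diverted through the cleaning device, and restored once the rate falls.

class BypassMonitor:
    def __init__(self, divert_above: int, restore_below: int):
        self.divert_above = divert_above
        self.restore_below = restore_below
        self.diverting = False

    def observe(self, conns_per_s: int) -> str:
        if not self.diverting and conns_per_s > self.divert_above:
            self.diverting = True        # pull traffic through the scrubber
        elif self.diverting and conns_per_s < self.restore_below:
            self.diverting = False       # attack subsided: restore main path
        return "clean" if self.diverting else "direct"

m = BypassMonitor(divert_above=10000, restore_below=2000)
print([m.observe(r) for r in (500, 20000, 5000, 1000)])
# ['direct', 'clean', 'clean', 'direct']
```

The two distinct thresholds (hysteresis) prevent the path from flapping when the attack rate hovers near a single cut-off.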
In the CC discussion we also mentioned human-machine identification, typified by the CAPTCHA. Its fundamental purpose is to decide whether a request is machine replay, but the recognition process itself faces problems.
The CAPTCHA's drawbacks were mentioned above: if it is not random enough it can be bypassed, and balancing user experience against recognition strength is hard.
I once took part in developing ticket-grabbing software and lived through 12306's many CAPTCHA changes; in the end, 12306 still has no good way to filter out all ticket-grabbing through CAPTCHAs. Automated image recognition is sometimes better than the human eye, so clearly the CAPTCHA in this case fails its original design intent.
Human-machine identification also faces genuinely fuzzy boundaries, web API calls for instance: those interfaces are meant to be driven by programs, so if a program calls them continuously in a loop, is that also an attack?
Whether via CAPTCHAs, more elaborate access-frequency computation, or other identification methods, improving recognition accuracy demands heavier computation, and no algorithm, however good, can guarantee both speed and accuracy in live traffic analysis. With the progress of machine learning and offline data processing, better human-machine identification solutions should emerge.
Communication and collaboration
As said before, DDoS defense is a perpetual battle of attack and defense. To get better defensive results out of this semi-automated accumulation requires good monitoring, organization, and process; attack handling that looks simple actually rests on accumulated experience and cooperation among colleagues across every process.