How to configure Nginx server to defend against CC attacks



0x00 Basic Principles of CC Attacks

A CC attack uses proxy servers to send large numbers of URL requests that are expensive to serve, such as requests that trigger database queries. The server spends most of its capacity on these requests and is quickly overwhelmed, producing a denial of service. The attacker disconnects from the proxy as soon as the request is sent; the proxy still forwards the request to the target server even though its client has gone away, so the attack costs the attacker very little, while from the target server's point of view the requests coming from the proxies look perfectly legitimate.

Traditional anti-CC measures took two forms. The first was to limit the number of connections per IP address, which is hard to apply when the attacking addresses are widely scattered. The second was to block proxy access: proxies usually add an X-Forwarded-For field to the HTTP header, but this has its own limits, since some proxy requests do not carry the field, and some legitimate clients genuinely need a proxy to reach the server, so blocking on it also shuts out normal users. HTTP flood (CC) attacks are hard to prevent for three reasons: 1. the source IP addresses are real and scattered; 2. the packets themselves are perfectly normal packets; 3. every request is a valid request that cannot simply be rejected.

The key idea behind the defense described here is that the attacker never accepts the server's response and actively disconnects after sending the request. So, to check whether a connection is part of a CC attack, the server does not execute the requested URL immediately; it simply returns a small redirect response pointing to a new URL. A normal client follows the redirect automatically, which is transparent to the user. A CC attacker never receives the response, never reconnects, and the server never has to do the expensive work.

0x01 Verifying Browser Behavior (Simplified Version)

A metaphor: the community is handing out red packets in the square, and the bad guys send in a batch of humanoid robots (without language modules) to grab them. The staff need a way to keep the red packets from being claimed by impostors, so before handing one over they give the recipient a slip of paper with the words "red packet" written on it. Anyone who can read the words aloud is a person and gets the packet; the robots, unable to read, are turned away. In this metaphor the people are browsers and the robots are attack hosts; we can tell them apart by checking for a cookie feature (reading the words on the paper). Here is how to write it in the nginx configuration file:

    if ($cookie_say != "hbnl") {
        add_header Set-Cookie "say=hbnl";
        rewrite .* "$scheme://$host$uri" redirect;
    }

Let's look at what these lines mean. When the cookie say is empty, nginx returns a 302 redirect whose Set-Cookie header sets say to hbnl. A visitor that carries that cookie value in the follow-up request can browse the website normally; one that does not will live in 302 forever. You can test this with a CC attack tool, a webshell, or a plain curl request: they all stay stuck in the 302 world. So is defending against CC really this easy? Of course it is not that simple.
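For context, here is a minimal sketch of where such a snippet might sit in a full configuration; the listen port, server_name, and backend address are placeholders and not part of the original article:

    server {
        listen 80;
        server_name example.com;                 # placeholder

        location / {
            # Visitors without the expected cookie get a 302 back to the same URL
            # together with Set-Cookie: say=hbnl; real browsers follow the redirect
            # and resend the request with the cookie, simple CC tools do not.
            if ($cookie_say != "hbnl") {
                add_header Set-Cookie "say=hbnl";
                rewrite .* "$scheme://$host$uri" redirect;
            }

            proxy_pass http://127.0.0.1:8080;    # placeholder backend
        }
    }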
Enhanced Version

If you read the configuration above carefully, you will find it is still flawed: an attacker who sets the cookie to say=hbnl in the attack tool (most CC tools allow this) turns the defense into a paper one. Back to the metaphor: after discovering the rule, the bad guys mount a loudspeaker on each robot that keeps repeating "red packet, red packet", and the robots start collecting red packets again. The staff's countermeasure is to ask each recipient to show a household register with their own name in it and read the name aloud: "I am so-and-so, and I am here to collect a red packet." The robots that can only shout "red packet" are turned away again. (To make the metaphor work, assume every robot carries a register of its own and is turned away because it cannot read out its own name; a bit contrived, but it illustrates the point.) Here is the corresponding configuration:

    if ($cookie_say != "hbnl$remote_addr") {
        add_header Set-Cookie "say=hbnl$remote_addr";
        rewrite .* "$scheme://$host$uri" redirect;
    }

The difference from the previous version is that the expected cookie value now differs for each source IP address. For example, a client at 1.2.3.4 must present say=hbnl1.2.3.4, so an attacker can no longer bypass the check by hard-coding a single cookie value into a CC tool. Run the CC tool against it again and you will find that all of its traffic falls into the 302 world.
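To watch the filter working, it can help to log the response status next to the cookie value. This logging snippet is an illustration, not part of the original article; the log format name and file path are placeholders:

    # goes in the http block
    log_format cc_debug '$remote_addr [$time_local] "$request" '
                        '$status say=$cookie_say';
    access_log /var/log/nginx/cc_debug.log cc_debug;

A normal browser shows a single 302 followed by 200 responses carrying the expected say value, while CC tool traffic shows an unbroken run of 302s with an empty or wrong cookie.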

 

However, this still does not feel like a foolproof plan: if an attacker studies the site's mechanism, there is always a way to discover how the cookie value is built and forge it in advance, because the differentiating data comes from the client's own information (its IP address, user agent, and so on). An attacker willing to spend some time can write an attack script tailored to the site.

Perfect Version

So how do we derive, from the client's own information, a value the client cannot compute? You have probably guessed it: a salted hash, for example md5("opencdn$remote_addr"). The attacker knows his own IP address, but he cannot know how the hash is computed from it, because a hash cannot be reversed. If you are still not at ease, add a few special characters to the salt and hash a couple of extra rounds, in case the value turns up in an online MD5 lookup service. Unfortunately nginx cannot hash strings out of the box, so we use the nginx_lua module:

    rewrite_by_lua '
        local say = ngx.md5("opencdn" .. ngx.var.remote_addr)
        if (ngx.var.cookie_say ~= say) then
            ngx.header["Set-Cookie"] = "say=" .. say
            return ngx.redirect(ngx.var.scheme .. "://" .. ngx.var.host .. ngx.var.uri)
        end
    ';

With this configuration the attacker can no longer pre-compute the say value in the cookie, so the attack traffic (proxy-based CC and low-level packet CC alike) stays trapped in the 302 hell. As you can see, apart from the md5 call, the logic is exactly the same as before, so if you can, you may instead install a third-party nginx module that computes hashes, which may be more efficient (a sketch using such a module is given at the end of this section).

This configuration can be placed in any location block. One caveat: if your website exposes an API, do not apply this configuration to the API paths, because API calls are not made by a browser and would be treated as attack traffic. Some weak crawlers will also get stuck in the 302 loop, which is worth keeping in mind. If you worry that an attacker could simulate the Set-Cookie step by parsing the response headers, you can set the cookie from JavaScript instead of from a header, by returning a small script that assigns document.cookie = .... Does that block the attack completely? Only the low-level attacks. If the attacker goes as far as embedding a WebKit module in every attack host so it can parse JavaScript and execute the Set-Cookie step, he too escapes the 302 hell, and from nginx's point of view his traffic is then indistinguishable from normal browser traffic. How do we defend against that? The answer comes in the next section.
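Before moving on: the third-party hash module mentioned above could look like this. One option is the set_md5 directive from the set-misc-nginx-module; this is only a sketch of that idea and assumes the module (plus its dependency ngx_devel_kit) is compiled into nginx, which the original article does not specify. The $say variable name is arbitrary:

    location / {
        # Same salt-and-hash scheme as the Lua version, computed by set_md5.
        set_md5 $say "opencdn$remote_addr";
        if ($cookie_say != $say) {
            add_header Set-Cookie "say=$say";
            rewrite .* "$scheme://$host$uri" redirect;
        }
        # ... the rest of the location configuration ...
    }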
0x02 Request Frequency Limit

It has to be said that many anti-CC measures simply limit the request frequency directly, and many of them have problems. What problems? First, limiting by IP address easily produces false positives: some regions have only a handful of egress IP addresses, and once there are enough visitors behind them the frequency quickly hits the cap and every user in that region is locked out of your site. So you say: then I will limit by session. Well, your session opens a door for the attacker. Why? As the "red packet" loudspeaker showed, sessions in many languages and frameworks can be forged.

Take PHP as an example. In the browser you can see a PHPSESSID cookie; a different ID means a different session, and if you make up a PHPSESSID of your own, the server will happily accept it and initialize a session for that ID. So an attacker can easily bypass a per-session request count by constructing a fresh session ID for every request it sends.

So how do we limit the request frequency? First we need a session ID that the attacker cannot simply write for himself. One way is to keep a pool of every ID that has been issued and look each incoming request up in it, rejecting those not found. We do not recommend this: the website already maintains its own session pool, so a second one is a waste, and the lookups and traversals cost too much performance. What we want is a stateless session ID. Can it be done? Yes:

    rewrite_by_lua '
        local random = ngx.var.cookie_random
        if (random == nil) then
            random = math.random(999999)
        end
        local token = ngx.md5("opencdn" .. ngx.var.remote_addr .. random)
        if (ngx.var.cookie_token ~= token) then
            ngx.header["Set-Cookie"] = {"token=" .. token, "random=" .. random}
            return ngx.redirect(ngx.var.scheme .. "://" .. ngx.var.host .. ngx.var.uri)
        end
    ';

Does this look familiar? It is the perfect-version configuration from the previous section plus a random number, so that different visitors behind the same IP address end up with different tokens. As before, if a third-party nginx module provides hash and random-number functions, this could be written as a pure configuration file without Lua. With this, every visitor carries a unique token that cannot be forged, and request limiting finally makes sense. Having the token, we can use the limit module directly, with no need for whitelists or blacklists:

    http {
        ...
        limit_req_zone $cookie_token zone=session_limit:3m rate=1r/s;
    }

Then we only need to add, after the token configuration above:

    limit_req zone=session_limit burst=5;

With these two lines nginx enforces the request frequency limit at the session level. But there is still a defect: an attacker can keep fetching new tokens to break through the limit. It would be better still if the rate at which a single IP address can obtain tokens were itself limited. Can that be done? Yes:

    http {
        ...
        limit_req_zone $cookie_token zone=session_limit:3m rate=1r/s;
        limit_req_zone $binary_remote_addr$uri zone=auth_limit:3m rate=1r/m;
    }

    location / {
        limit_req zone=session_limit burst=5;
        rewrite_by_lua '
            local random = ngx.var.cookie_random
            if (random == nil) then
                return ngx.redirect("/auth?url=" .. ngx.var.request_uri)
            end
            local token = ngx.md5("opencdn" .. ngx.var.remote_addr .. random)
            if (ngx.var.cookie_token ~= token) then
                return ngx.redirect("/auth?url=" .. ngx.var.request_uri)
            end
        ';
    }

    location /auth {
        limit_req zone=auth_limit burst=1;
        if ($arg_url = "") {
            return 403;
        }
        access_by_lua '
            local random = math.random(9999)
            local token = ngx.md5("opencdn" .. ngx.var.remote_addr .. random)
            if (ngx.var.cookie_token ~= token) then
                ngx.header["Set-Cookie"] = {"token=" .. token, "random=" .. random}
                return ngx.redirect(ngx.var.arg_url)
            end
        ';
    }

As you may have guessed, this configuration splits the token-issuing job out into a separate /auth page and then rate-limits that page: here an IP address is granted one token per minute. The number can of course be adjusted to the needs of the business. Note that the /auth part uses access_by_lua rather than rewrite_by_lua, because the limit module runs after the rewrite phase; if the 302 redirect were issued during the rewrite phase, the limit would never take effect. For the same reason I do not currently see how to implement this part with the native configuration file alone, because I do not know how to issue a 302 redirect after the rewrite phase using only configuration directives; if you know a way, please share it.

If this restriction still does not satisfy you and you would like, say, to ban an IP outright after it hits the limit several times in one day, you can use a similar idea: create an error page, and when the limit is reached, send the request to that error page instead of returning a bare 503. Then put a request limit on the error page as well, for example 100 visits per day; once an IP has requested the error page more than 100 times, it can no longer access the website for the rest of that day. (A rough sketch of this idea is given at the end of the article.)

With these configurations in place, the site has a working access frequency limit. It does not completely prevent attacks; it only raises the attacker's cost and the site's resistance. And the premise is that nginx itself can absorb the traffic and your bandwidth is not saturated; if the pipe is simply flooded, no amount of configuration will keep the door open for business. With traffic protection in place, let's look at defending against scanners and the like.

0x03 Anti-Scanning: the ngx_lua_waf Module

ngx_lua_waf is a good WAF module, and there is no need to reinvent that wheel; you can use it directly for protection. You can also pair it with the limit module above to add per-IP or per-session restrictions.

0x04 Summary

This article is meant as an illustration of ideas. We hope you will not simply copy the example configurations, but rather write configuration files suited to your own site and your own business needs.
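For the per-day ban idea mentioned in section 0x02, here is a rough sketch of one possible implementation using the nginx_lua module's shared dictionary. It is not from the original article; the dictionary name, threshold, and paths are placeholders:

    http {
        lua_shared_dict cc_block 10m;                 # shared per-IP hit counter (placeholder name)
        limit_req_zone $cookie_token zone=session_limit:3m rate=1r/s;

        server {
            ...
            location / {
                # Turn away IPs that have already exhausted their error-page quota today.
                access_by_lua '
                    local hits = ngx.shared.cc_block:get(ngx.var.binary_remote_addr)
                    if hits and hits > 100 then
                        return ngx.exit(403)
                    end
                ';

                limit_req zone=session_limit burst=5;
                error_page 503 = @limited;            # over-limit requests go to the counter page
                # ... token check and normal configuration as above ...
            }

            location @limited {
                # Count how often this IP has hit the limit; the counter expires after one day.
                content_by_lua '
                    local dict = ngx.shared.cc_block
                    local key  = ngx.var.binary_remote_addr
                    local hits = dict:incr(key, 1)
                    if not hits then
                        dict:set(key, 1, 86400)
                        hits = 1
                    end
                    if hits > 100 then
                        return ngx.exit(403)
                    end
                    ngx.exit(503)
                ';
            }
        }
    }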

 
