19 Tips: Linux Server Load Balancing

Source: Internet
Author: User
Tags: nginx, load balancing

As a Linux/Unix system engineer, I have worked on external projects for the past few years and have built the architectures of many small and medium-sized websites, dealing with F5, LVS, and nginx along the way. I would like to explain, in plain and easy-to-understand language, what a server load balancer is and what a Linux cluster is, to help you get past the common misunderstandings and see them for what they really are. For project implementation cases, see my related articles on network.51cto.com.

1. Website architectures today are generally divided into a load balancing layer, a web layer, and a database layer. In practice I usually add a fourth layer, the file server layer, because as a site's PV grows, so does the pressure on its file servers; with the growing maturity of MooseFS and DRBD + Heartbeat + NFS, however, this is no longer a big problem. The frontend load balancing layer of a website is called the Director; it distributes incoming requests, and the most common distribution method is round robin.

2. F5 implements load balancing in hardware. It is mostly used in CDN systems, for example to balance a squid reverse-proxy acceleration cluster. As a dedicated hardware load balancing device, it is especially suitable for scenarios with high requirements on new connections per second and concurrent connections. LVS and nginx are implemented in software, but their stability is also very strong, and they hold up well under high concurrency.

3. nginx depends little on the network: in theory, if you can ping a node and fetch a page from it, nginx can balance to it. nginx can also distinguish between intranet and Internet addresses; if a node has both an intranet and an Internet address, it effectively has a backup line on a single machine. LVS, by contrast, depends on the network environment: it works best when the director and the real servers sit in the same network segment and LVS distributes traffic in DR (direct routing) mode, in which case the results are guaranteed.

4. The mature load balancer high availability solutions at present are LVS + keepalived and nginx + keepalived. nginx used to lack a mature dual-machine backup solution, but one can be built with shell script monitoring; if you are interested, refer to the project implementation solutions on 51cto. In addition, high availability for nginx load balancing can also be achieved through DNS round robin; related articles are available online if you want the details.
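As a rough illustration of that shell-script monitoring idea, here is a minimal sketch; the local URL, the init-script paths, and the choice of stopping keepalived to trigger failover are my assumptions, not part of the original article:

    #!/bin/bash
    # Minimal nginx watchdog sketch: if nginx stops answering locally, try a
    # restart; if it still fails, stop keepalived so the standby director
    # takes over the VIP. Paths and URL are placeholders.
    if ! curl -s -o /dev/null --max-time 3 http://127.0.0.1/; then
        /etc/init.d/nginx restart
        sleep 3
        if ! curl -s -o /dev/null --max-time 3 http://127.0.0.1/; then
            /etc/init.d/keepalived stop
        fi
    fi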

5. Strictly speaking, a cluster refers to the web or Tomcat cluster behind the load balancer, while nowadays "cluster" is often used for the overall architecture, including the load balancer and the backend application server cluster. Many people like to call a Linux cluster "LVS", but in my view the two terms should be kept strictly separate.

6. High availability of the load balancer means implementing HA for the load balancer itself: when one load balancer fails, the other takes over in under a second. The most common software for this is keepalived and heartbeat, and the mature production solutions are LVS + keepalived and nginx + keepalived.
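For reference, a minimal keepalived sketch for the MASTER director is shown below; the interface name, router ID, password, and VIP are placeholder values, and the BACKUP machine would use state BACKUP with a lower priority:

    # keepalived.conf on the MASTER director (illustrative values only)
    vrrp_instance VI_1 {
        state MASTER
        interface eth0            # interface carrying the VIP (assumption)
        virtual_router_id 51
        priority 100              # the BACKUP node uses a lower value, e.g. 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.0.100         # the VIP described in tip 10
        }
    }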

7. LVS has many advantages: ① strong load capacity; ② stable operation (thanks to mature HA solutions); ③ it works at the transport layer and only distributes requests, generating no traffic of its own, so heavy traffic does not degrade it; ④ it supports basically any application. These advantages have earned LVS a lot of fans. But nothing is absolute: LVS is quite dependent on the network, and in application scenarios with a relatively complex network environment I have had to give it up in favor of nginx.

8. nginx depends little on the network, its powerful and flexible regular-expression-based configuration and rich feature set attract many people, and it is quite simple and convenient to configure. For small and medium-sized projects I basically always consider it first. Of course, if the budget allows, F5 is the best choice.

9. In a large website architecture you can use F5, LVS, or nginx, choosing any two, or even all three, in combination. If F5 is ruled out for budget reasons, the very front of the site should be LVS, i.e. DNS should point to LVS; the strengths of LVS make it well suited to this position. Important IP addresses are best hosted by LVS, such as the database IP and the WebService server IP. Over time, more and more things come to depend on these IPs, and changing them later causes one failure after another, so it is safest to hand these important IPs over to LVS.
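To make the idea concrete, here is a minimal sketch of publishing one such address as an LVS virtual service in DR mode; the VIP, real-server addresses, and scheduler are placeholders, and in an LVS + keepalived setup these rules are normally generated from keepalived's virtual_server section rather than typed by hand:

    # Virtual service on the VIP, round-robin scheduling
    ipvsadm -A -t 192.168.0.100:80 -s rr
    # Two real servers attached in DR (direct routing) mode
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.11:80 -g
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.12:80 -g
    # Note: in DR mode each real server also needs the VIP bound on lo
    # and ARP replies for the VIP suppressed.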

10. The VIP is the virtual IP address managed by keepalived. It is the public-facing IP address and the one that DNS points to, so when designing the website architecture you must apply for this external IP address from your IDC.

11. During actual project implementation we found that both LVS and nginx handle HTTPS well; LVS in particular is simpler, since it just forwards the connection without touching the encrypted traffic.

12. Troubleshooting is convenient with both LVS + keepalived and nginx + keepalived. If the load balancing system or its servers fail, DNS can be pointed directly at one of the real web servers behind them as a short-term workaround. After all, for advertising and e-commerce websites PV is money, which is exactly why the load balancer is designed to be highly available. For large advertising sites, I suggest going straight to a CDN system.

13. Linux clusters are over-mystified these days. In fact they are not that complicated; the key is which solution fits your application scenario. None of nginx, LVS, or F5 is magic; use whichever is convenient and applicable.

14. Session sharing is also an old topic. nginx can solve it with ip_hash, and both F5 and LVS have session persistence mechanisms for the same purpose. Another option is to write sessions into the database, which also solves session sharing nicely, although it adds load to the database; the choice is up to the system architect.
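A minimal sketch of the ip_hash approach (the backend addresses are placeholders): requests from one client IP always land on the same backend, so in-memory sessions keep working.

    upstream web_backend {
        ip_hash;                  # pin each client IP to one backend
        server 192.168.0.11:80;
        server 192.168.0.12:80;
    }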

15. The sites I currently maintain run at roughly 1000 concurrent connections for an e-commerce site, about 100 for a securities and information site, and about 3000 for a large online advertising site. I feel that concurrency at the web layer is less and less of a problem: with powerful servers and nginx's high web concurrency, the web layer is not the bottleneck. On the contrary, the pressure on the file server layer and the database layer keeps growing, and a single NFS server is no longer up to the job; the good solutions now are MooseFS and DRBD + Heartbeat + NFS. As for my favorite, MySQL, the mature solution is still master-slave replication; if the pressure is too high, I have to move to an Oracle RAC dual-node solution.
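For the master-slave MySQL setup mentioned here, a minimal configuration sketch might look like the following; the server IDs, binlog name, and replication account are placeholders:

    # Master /etc/my.cnf
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # Slave /etc/my.cnf (each slave needs a unique server-id)
    [mysqld]
    server-id = 2

    # On the slave, point replication at the master using the binlog file and
    # position reported by SHOW MASTER STATUS on the master, for example:
    #   CHANGE MASTER TO MASTER_HOST='10.0.0.1', MASTER_USER='repl',
    #                    MASTER_PASSWORD='...', MASTER_LOG_FILE='mysql-bin.000001',
    #                    MASTER_LOG_POS=4;
    #   START SLAVE;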

16. Nowadays everyone is using nginx, especially as a web server. In fact, when the server hardware is good and memory is sufficient, Apache's ability to handle concurrency is not weak either, and the bottleneck of the whole site is usually the database. I suggest getting to know both Apache and nginx, using nginx as the load balancer at the front and Apache as the web server at the back; the results are quite good.
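A minimal sketch of that nginx-in-front, Apache-behind split (the addresses and the 8080 backend port are assumptions): nginx listens on port 80 and proxies to the Apache instances.

    upstream apache_pool {
        server 192.168.0.21:8080;     # Apache instance 1 (placeholder)
        server 192.168.0.22:8080;     # Apache instance 2 (placeholder)
    }
    server {
        listen 80;
        location / {
            proxy_pass http://apache_pool;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }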

17. Heartbeat's split-brain problem is not as serious as people think, and it can be used in a production environment. DRBD + Heartbeat is a mature combination and worth mastering; I have used it on many occasions to replace EMC shared storage. After all, a price of around 300,000 is not something every customer can accept.

18. No matter how mature the design is, I recommend deploying a Nagios monitoring machine to watch the servers in real time, with email and SMS alarms enabled; after all, a mobile phone is always with you. If conditions allow, you can also buy a commercial website-monitoring service that probes your site every minute and, if it finds the site is not alive, sends a warning to your mailbox or calls you directly.

19. For website security, I suggest using hardware firewalls at the very least, for example a layer-3 Huawei firewall plus a Tiantai web firewall, and DDoS protection must be in place. The Linux servers' own iptables and SELinux can then both be disabled; and of course, the fewer open ports, the better.

Note: site response times were tested with http://tools.pingdom.com. Neither LVS + keepalived nor nginx + keepalived affects speed, so there is no need to worry about that; nginx is also becoming more and more mature as a reverse-proxy accelerator.
