Dynamic site on-line/off-line based on the Nginx dyups module


Brief introduction

Today's main topic: for distributed services, how to bring a site on-line and take it off-line smoothly.

In a distributed-services setup we usually put Nginx in front as the load balancer: business sites access a service site through a unified Nginx entry, and Nginx routes each request to a designated backend server according to a certain polling policy. There is nothing wrong with this architecture, but there are a few problems worth considering:

1. Taking a site on-line and off-line. We usually update a site by overwriting its files directly and then restarting it, which interrupts some in-flight requests. For non-core logic that may be acceptable, but for core logic the interrupted requests can affect data consistency, for example funds, transactions, and orders.
2. Dynamically adding and removing machines. When a site's traffic grows and new machines are added, the Nginx configuration has to be modified and reloaded. Although a reload is fast, it still causes a momentary interruption of requests.

For the first problem, we could update during the hours when request volume is low. That works for companies with a stable business, but Internet companies may ship a new version every two or three days and need it on-line immediately; if every release has to wait until four o'clock in the morning, the whole development rhythm slows down. For the second problem, when the traffic is predictable, such as a big promotion, capacity can be added up to three days in advance while the request volume is still low.

In recent years, with the popularization of SOA and the advent of microservices, especially the emergence of Dubbo, the concept of service governance has been put forward. Service governance is a very ambitious concept that covers service registration, service auto-discovery, service routing, service dependency, cluster fault tolerance, service degradation, service monitoring, service approval, and so on. Of course, not every service center has to implement all of these; a company can implement whatever its actual needs require.

Dynamic on-line/off-line based on the Nginx dyups module

Based on the above, I plan to implement a tool that first solves the problem of taking a site off-line and scaling it dynamically, i.e. updating the site without restarting Nginx and without losing requests, and that also carries some service-governance functions.

Service on-line

1. When a new service goes on-line, several machines are usually requested in advance. Operations adds the servers to Nginx and adds the corresponding upstream. Under normal circumstances the upstream would be configured with the backend server IPs, but here they are not configured (if allowed, even this step can be omitted).
2. The service is deployed and started; at startup it registers its own information, including IP and port, with the registry (see the sketch after this list).
3. Upon receiving the registration, the registry performs a health check on the service to make sure it can serve requests without problems, and marks the service as being in the Pre-launch state.
4. In the admin center, a Pre-launch service can be set to Online; the service management center then calls Nginx's on-line interface, adds or updates the service IP in the upstream, and the service starts receiving traffic.
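The article does not specify what the registry's API looks like, so here is only a minimal sketch of step 2, the self-registration at startup. The registry address, the /services endpoint and the payload fields are all assumptions made for illustration, not part of the original design.

```python
# Minimal sketch of step 2: at startup the service reports its own IP and
# port to the registry. URL, path and payload fields are hypothetical.
import json
import socket
import urllib.request

REGISTRY_URL = "http://registry.internal:8500/services"  # hypothetical address and path


def register_self(service_name, port):
    """Report this instance's IP and port to the registry at startup."""
    ip = socket.gethostbyname(socket.gethostname())
    payload = json.dumps({"name": service_name, "ip": ip, "port": port}).encode()
    req = urllib.request.Request(
        REGISTRY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The registry is expected to run its health check after this call and
    # mark the instance as Pre-launch (steps 3 and 4 happen on its side).
    with urllib.request.urlopen(req, timeout=3) as resp:
        resp.read()


if __name__ == "__main__":
    register_self("order-service", 8080)
```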
Service updates

If we now have a service that needs to be updated, the steps are:

1. In the background management center, set the service to Offline; the service center calls Nginx's off-line interface and marks the IP of the specified server as down.
2. After waiting about one minute, once it is confirmed that no new connections are coming in, the service site can be updated.
3. Once the update is complete, manually set the service to Online; the service center calls Nginx's on-line interface and marks the IP of the specified server as up again.

Of course, for mature services these steps can be automated. Some companies have automated publishing tools, and by integrating with them the off-line, update and on-line steps can all be performed automatically.

During service run

While the service is running, a health-check service keeps probing all the sites that provide the service. Once a problem is detected, the off-line logic is executed; when the problem is resolved, the on-line process is executed again.

Dynamic addition and removal of machines

While the service is running, for some reason the request volume may become very high (provided these requests are legitimate) and exceed the carrying capacity of the current cluster. When the system detects this, machines can be added dynamically. With today's popular Docker, for example, launching a container launches the application; the application registers its own information with the registry at startup, the registry synchronizes the information to Nginx, and the application can start receiving traffic. On the whole, this achieves elastic scaling.

Why not implement dynamic service discovery?

As you can see from the design above, there is already a service registry. Since there is a registry, business sites could connect to it to obtain the real service IPs and then bypass Nginx to connect to the services directly. The reasons this is not done here are:

1. Implementing dynamic service discovery requires an RPC framework, plus soft load balancing, failover and reconnection, rate limiting and so on for the services; the whole design rises to another level of complexity. Considering that some projects do not use RPC, and that I do not want to be too intrusive on existing projects, this is not implemented here. That does not mean these functions are missing: load balancing, failover and rate limiting are all available in Nginx and can be used directly, so there is no need to redevelop them.
2. Obtaining the real service IPs and connecting directly is generally done when traffic is especially heavy and Nginx becomes the bottleneck. In practice Nginx's capacity is rarely exhausted, and even if it were, this could be solved by scaling out at the Nginx level, so Nginx is still used as the load balancer here.

Here are the key points of this project:

1. Service registration and health checking present no technical difficulty and are not explained here.
2. Taking servers on-line and off-line in Nginx is the difficult part, because Nginx itself does not provide such an API; it has to be done with OpenResty and some third-party extension modules. The two extension modules mainly used here are ngx_http_dyups_module and lua-upstream-nginx-module.

ngx_http_dyups_module (https://github.com/yzprofile/ngx_http_dyups_module) provides a coarse-grained upstream management method: entire upstreams can be added and removed dynamically.
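dyups exposes its management API over HTTP once the dyups_interface directive is enabled in a dedicated server block (typically on an internal management port), and the proxied location usually references the upstream through a variable so that it can be resolved at request time. Below is a rough sketch of how the service management center could drive that interface; the management port 8081, the upstream name and the backend addresses are assumptions for illustration.

```python
# Sketch of driving the dyups HTTP management interface.
# Assumes dyups_interface is exposed on 127.0.0.1:8081; the upstream name
# and backend addresses are made up for illustration.
import urllib.request

DYUPS = "http://127.0.0.1:8081"  # assumed address of the dyups_interface server block


def dyups(method, path, body=None):
    """Send one request to the dyups management interface and return its reply."""
    data = body.encode() if body is not None else None
    req = urllib.request.Request(DYUPS + path, data=data, method=method)
    with urllib.request.urlopen(req, timeout=3) as resp:
        return resp.read().decode()


# Create (or fully replace) the upstream "order_service" with two backends.
print(dyups("POST", "/upstream/order_service",
            "server 10.0.0.11:8080;server 10.0.0.12:8080;"))

# List every dynamic upstream and its servers.
print(dyups("GET", "/detail"))

# Remove the whole upstream, e.g. when the service is decommissioned.
print(dyups("DELETE", "/upstream/order_service"))
```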
lua-upstream-nginx-module (https://github.com/openresty/lua-upstream-nginx-module) provides fine-grained management down to a single service IP; it offers a set_peer_down method with which an individual IP in an upstream can be taken off-line or brought back on-line.

3. As an alternative, you can also use ngx_dynamic_upstream (https://github.com/cubicdaiya/ngx_dynamic_upstream).

What these plugins have in common is that they modify the Nginx upstream configuration dynamically, without restarting Nginx.
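To tie this back to the update flow described earlier, here is a rough sketch of the off-line, update, on-line cycle for a single backend using only dyups' coarse-grained interface, where taking one server out means re-publishing the whole server list without it. With lua-upstream-nginx-module's set_peer_down (a Lua API inside Nginx, not shown here) the same effect can be achieved per peer without re-publishing the list. The management address, upstream name, backend addresses and the one-minute drain wait are carried over from the text above as assumptions.

```python
# Rough sketch of the off-line -> update -> on-line cycle for one backend,
# using only dyups' coarse-grained interface. Addresses and names are the
# same hypothetical values used in the previous sketch.
import time
import urllib.request

DYUPS = "http://127.0.0.1:8081"                  # assumed dyups management address
UPSTREAM = "order_service"                       # hypothetical upstream name
SERVERS = ["10.0.0.11:8080", "10.0.0.12:8080"]   # hypothetical backends


def set_upstream(servers):
    """Re-publish the whole server list of the upstream (dyups replaces it as a unit)."""
    body = "".join("server %s;" % s for s in servers).encode()
    req = urllib.request.Request("%s/upstream/%s" % (DYUPS, UPSTREAM),
                                 data=body, method="POST")
    with urllib.request.urlopen(req, timeout=3) as resp:
        return resp.read().decode()


def deploy(target):
    """Placeholder for the actual overwrite/restart of the site on `target`."""
    print("deploying new version to %s ..." % target)


def update_one_backend(target):
    # 1. Off-line: publish the upstream without the target server.
    set_upstream([s for s in SERVERS if s != target])
    # 2. Wait for in-flight requests to drain before touching the site.
    time.sleep(60)
    deploy(target)
    # 3. On-line: publish the full server list again.
    set_upstream(SERVERS)


if __name__ == "__main__":
    update_one_backend("10.0.0.11:8080")
```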

PostScript

Finally, I would like to ask everyone: how does your company take sites on-line and off-line? Do you simply overwrite and restart, or do you have other strategies? You are welcome to discuss in the comments.
