Cluster: many servers are used together to provide the same function; for example, MySQL installed on many servers forms a MySQL cluster.
Load balancing: distributes traffic across those servers. For example, when many users are issuing read requests and one MySQL server is handling far more reads than another, the load balancer redirects part of the traffic from server A to server B, preventing any single server from going down because of excessive traffic.
Distributed: a system is split into different services that are deployed separately, e.g. a user module, a BBS forum module, a content module, and a payment module. When traffic is very high and a single server can no longer meet the demand, the services are spread across several, dozens, or even hundreds of machines.
Benefits: each module is responsible only for its own functionality and its own request load, and each developer is responsible only for their own module.
Distributed application development simply means that the three tiers, user interface, back-end services, and data management, are deployed in different places. The user interface implements functionality on the client side, the back-end services run on dedicated application servers, and data management is implemented on a dedicated database server.
Service-oriented architecture (SOA) and microservices:
SOA is a service-oriented architecture, and microservices are one way of realizing SOA.
The purpose of microservices is to split an application effectively, enabling agile development and deployment.
The system is composed of different services.
Each service is developed as a standalone piece of business functionality.
Each service is deployed separately and runs in its own process.
1. How does the client invoke the services?
API Gateway ---- Nginx
Provides a unified service entry point, making the microservices transparent to the front end
Aggregates back-end services to save traffic and improve performance
Provides security, filtering, rate limiting, and other API management functions
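As a rough illustration of the gateway idea (not a real gateway or Nginx implementation), the sketch below uses the JDK's built-in HTTP server to forward requests to back-end services based on the path prefix. The routes, ports, and service addresses are assumptions made up for the example; in practice they would come from configuration or service discovery.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

// Minimal API-gateway sketch: one entry point, routing by path prefix.
public class GatewaySketch {
    // Hypothetical back-end addresses; a real gateway would load these dynamically.
    static final Map<String, String> ROUTES = Map.of(
            "/users",   "http://localhost:8081",
            "/orders",  "http://localhost:8082",
            "/payment", "http://localhost:8083");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpServer gateway = HttpServer.create(new InetSocketAddress(8080), 0);

        gateway.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            // Pick the back-end whose prefix matches the request path.
            String backend = ROUTES.entrySet().stream()
                    .filter(e -> path.startsWith(e.getKey()))
                    .map(Map.Entry::getValue)
                    .findFirst().orElse(null);
            int status;
            byte[] body;
            if (backend == null) {
                status = 404;
                body = "no route".getBytes();
            } else {
                try {
                    // Forward the request to the back-end service (GET only, to keep the sketch short).
                    HttpResponse<byte[]> resp = client.send(
                            HttpRequest.newBuilder(URI.create(backend + path)).GET().build(),
                            HttpResponse.BodyHandlers.ofByteArray());
                    status = resp.statusCode();
                    body = resp.body();
                } catch (IOException | InterruptedException e) {
                    status = 502;
                    body = "backend unavailable".getBytes();
                }
            }
            exchange.sendResponseHeaders(status, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        gateway.start();
    }
}
```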
2. How do services communicate with each other?
Synchronously: RPC ---- Dubbo
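The following is a minimal sketch of the idea behind synchronous RPC, not the actual Dubbo API: the caller programs against a plain Java interface, while a dynamic proxy intercepts each method call and, in a real framework, serializes it, sends it to the provider over the network, and blocks until the result comes back. The OrderService interface, the provider address, and the fake reply are assumptions for illustration.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical service interface shared by provider and consumer.
interface OrderService {
    String findOrder(long orderId);
}

public class RpcSketch {
    // Create a client-side stub: method calls are turned into (pretend) network requests.
    @SuppressWarnings("unchecked")
    static <T> T refer(Class<T> iface, String providerAddress) {
        InvocationHandler handler = (proxy, method, args) -> {
            // A real RPC framework would serialize the method name and arguments,
            // send them to providerAddress, and block until the response arrives.
            System.out.printf("calling %s.%s on %s%n",
                    iface.getSimpleName(), method.getName(), providerAddress);
            return "order-" + args[0]; // fake result standing in for the remote reply
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[]{iface}, handler);
    }

    public static void main(String[] args) {
        // The consumer uses the interface as if it were a local object.
        OrderService orders = refer(OrderService.class, "localhost:20880");
        System.out.println(orders.findOrder(42L));
    }
}
```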
3. With so many services, how are they found? In other words, how do we determine which server a given resource should be accessed on?
In a microservices architecture, each service generally has multiple instances, with load balancing across them. A service may go offline at any time, or new service nodes may be added to handle temporary spikes in traffic. How do services discover each other, and how are they managed? That is the problem of service discovery.
The basic approach is to manage service registration information in a distributed way using a technology such as ZooKeeper. When a service comes online, the service provider registers its service information with ZooKeeper (or a similar framework) and keeps it up to date through a heartbeat over a long-lived connection. The service caller looks up providers through ZooKeeper, picks one according to a configurable algorithm, and can also cache the service information locally to improve performance. When a service goes offline, ZooKeeper notifies the service callers.
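Below is a minimal sketch of this registration/discovery flow using the raw ZooKeeper client API. The connect string, the /services/order-service path, and the provider address are assumptions for the example, and the parent path is assumed to already exist; production systems typically use Curator or a framework's built-in registry instead.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import java.util.List;

// Sketch of service registration and discovery via ZooKeeper.
public class DiscoverySketch {
    static final String SERVICE_PATH = "/services/order-service"; // assumed to exist

    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, event -> {});

        // Provider side: register this instance as an EPHEMERAL node, so it
        // disappears automatically when the provider's session (heartbeat) dies.
        zk.create(SERVICE_PATH + "/192.168.0.10:20880", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Consumer side: list the live instances and watch for changes;
        // ZooKeeper fires the watcher when providers come or go.
        List<String> instances = zk.getChildren(SERVICE_PATH,
                event -> System.out.println("provider list changed: " + event));
        System.out.println("available providers: " + instances);
    }
}
```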
4. What happens if a service goes down?
5. How is load balancing done in a distributed system?
In a distributed system, load balancing is the important task of distributing incoming requests across one or more nodes in the network.
Load balancing is divided into hardware load balancing and software load balancing.
Hardware load balancing, as the name implies, uses specialized hardware devices installed in front of the server nodes to do the load balancing work; F5 is a well-known example.
Software load balancing distributes requests through dedicated load-balancing software installed on a server, or through a load-balancer module.
Common load-balancing policies (a code sketch follows the list):
(i) Round robin. Each incoming request is assigned to the internal servers in turn, from 1 to N, and then the cycle restarts.
(ii) Random. Requests are assigned to servers at random.
(iii) Minimum response time. The load balancer sends a probe request (such as a ping) to each internal server, and then assigns the client's service request to the server that responded to the probe fastest.
(iv) Minimum number of connections. The load balancer keeps a record for each internal server of the number of connections it is currently handling; a new service connection request is assigned to the server with the fewest current connections, which makes the balancing reflect the real load more closely. This algorithm suits services whose requests take a long time to process, such as FTP.
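As an illustration of these policies, here is a minimal Java sketch of round-robin, random, and least-connections selection. The Server type is hypothetical, and the minimum-response-time policy is omitted because it requires active probing of the servers.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of three of the policies above; Server is a made-up type.
public class LoadBalancerSketch {
    record Server(String address, AtomicInteger activeConnections) {}

    private final List<Server> servers;
    private final AtomicInteger counter = new AtomicInteger();

    LoadBalancerSketch(List<Server> servers) { this.servers = servers; }

    // (i) Round robin: take servers in turn, wrapping around after the last one.
    Server roundRobin() {
        int i = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    // (ii) Random: pick any server with equal probability.
    Server random() {
        return servers.get(ThreadLocalRandom.current().nextInt(servers.size()));
    }

    // (iv) Least connections: pick the server currently handling the fewest connections.
    Server leastConnections() {
        return servers.stream()
                .min((a, b) -> Integer.compare(a.activeConnections().get(),
                                               b.activeConnections().get()))
                .orElseThrow();
    }
}
```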
Reverse proxy: