Discover sticky-connections load balancing: articles, news, trends, analysis, and practical advice about sticky-connections load balancing on alibabacloud.com
1. Overview
In the previous two installments we walked through building Nginx + keepalived. Starting with this article we honor the promise made last time: over the next two articles we will introduce Nginx + keepalived and LVS + keepalived for building a highly available load layer. If you are not familiar with Nginx or LVS, see my two earlier articles, "Architecture Design:
This article is the first in the load-balancing series; it introduces load-balancing algorithms and hardware load balancing. Part of the content is excerpted from my reading notes. 3. Load
Therefore, when a heavyweight Web server (such as Apache) serves as a cluster node, the algorithm's load-balancing effect is diminished. To reduce this adverse effect, you can set a maximum number of connections for each node (expressed by the Threshold setting).
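The per-node connection cap described above can be sketched like this (a minimal illustration; the node names and threshold values are invented for the example):

```python
# Least connections with a per-node cap ("Threshold"): a node that has
# reached its maximum connection count is skipped when picking a target.
# Node names and numbers below are hypothetical.
nodes = [
    {"name": "apache1", "conns": 95,  "threshold": 100},
    {"name": "apache2", "conns": 100, "threshold": 100},  # at capacity, skipped
    {"name": "apache3", "conns": 97,  "threshold": 120},
]

def pick_node(nodes):
    eligible = [n for n in nodes if n["conns"] < n["threshold"]]
    if not eligible:
        return None  # every node is saturated; the request must be queued or rejected
    best = min(eligible, key=lambda n: n["conns"])
    best["conns"] += 1  # the chosen node takes the new connection
    return best["name"]

chosen = pick_node(nodes)  # apache2 is full, so the least-loaded eligible node wins
```

Without the threshold check, apache2 would keep receiving traffic it cannot absorb; with it, a saturated node simply drops out of the rotation until connections drain.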
3.4 Least missing
In the least-missing method, the balancer records request
1. A brief introduction to load balancing
Load balancing, also known as load sharing, refers to dynamically adjusting the load on a system and distributing the load
Options:
Client affinity: a cookie is assigned to requests from each client, and that cookie is then used to identify which server should handle the request.
Host name affinity: sticky processing based on the host name; two providers are available:
Microsoft.Web.Arr.HostNameRoundRobin: distributes host names across servers as evenly as possible
Microsoft.Web.Arr.HostNameMemory: allocates accord
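The cookie-based client-affinity idea above can be sketched as follows. This is a minimal illustration, not ARR's actual implementation: the cookie name follows ARR's convention, but the server pool and the value scheme are invented.

```python
SERVERS = ["web1", "web2", "web3"]  # hypothetical backend pool
COOKIE_NAME = "ARRAffinity"         # cookie name borrowed from ARR's convention

def pick_server(cookies: dict) -> tuple:
    """Return (server, cookies_to_set). Reuse the affinity cookie if the
    client already has one; otherwise assign a server and pin it."""
    if cookies.get(COOKIE_NAME) in SERVERS:
        return cookies[COOKIE_NAME], {}
    # First request from this client: pick a server and pin it via cookie.
    # A real balancer would pick by round-robin or least connections.
    server = SERVERS[0]
    return server, {COOKIE_NAME: server}

# First request carries no cookie, so a server is assigned and a cookie set.
server, to_set = pick_server({})
# The follow-up request presents the cookie and sticks to the same server.
server2, _ = pick_server(to_set)
```

The key property is that load distribution happens only on the first request; every later request from the same client bypasses the balancing decision entirely.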
Low latency, suitable for streaming media and other latency-sensitive applications
High performance with high throughput
The server receives the client's real source IP address directly
Can only do layer-4 load balancing; layer-7 features (such as compression) cannot be used
Requires configuring the loopback address on the server
The high-speed cache of the proxy server can also provide a performance benefit. However, the approach has drawbacks. First, a reverse proxy must be developed for each service, which is not an easy task. And although the reverse proxy server itself can achieve high efficiency, it must maintain two connections for each proxied request, one external and one internal; therefore, the load
becoming a standard protocol for various operating systems. Therefore, VS/TUN is also applicable to backend servers running other operating systems.
● Virtual server via direct routing (VS/DR)
Like VS/TUN, the VS/DR load balancer processes only the client-to-server half of the connection; response data can be returned to the client directly over a separate network route. This can greatly improve the scalability of an LVS cluster. Compared with VS/TUN, this method has no IP tunneling overhead
, BIG-IP removes the server from the server queue and excludes it from allocation of the next user request until it returns to normal.
· Observed mode (observed): selects a server for each new request based on the best balance between connection count and response time. When a server suffers a layer-2-through-layer-7 failure, BIG-IP removes it from the server queue and excludes it from allocation of the next user request until it returns to normal.
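As an illustration only, observed-mode selection might combine the two metrics like this. BIG-IP's actual scoring is proprietary; the server data and the 50/50 weighting below are assumptions made for the sketch.

```python
# Hypothetical "observed"-style selection: balance active connections
# against recent response time. The equal weighting is an assumption,
# not BIG-IP's actual formula; the server metrics are made up.
servers = [
    {"name": "rs1", "conns": 120, "resp_ms": 40.0},
    {"name": "rs2", "conns": 80,  "resp_ms": 90.0},
    {"name": "rs3", "conns": 100, "resp_ms": 50.0},
]

def normalize(values):
    """Scale a list of metrics into [0, 1] so the two can be combined."""
    hi = max(values) or 1
    return [v / hi for v in values]

def observed_pick(servers):
    conn_scores = normalize([s["conns"] for s in servers])
    rt_scores = normalize([s["resp_ms"] for s in servers])
    # Lower combined score = fewer connections and faster responses.
    combined = [0.5 * c + 0.5 * r for c, r in zip(conn_scores, rt_scores)]
    return servers[combined.index(min(combined))]["name"]

winner = observed_pick(servers)
```

Here rs2 has the fewest connections and rs1 the fastest responses, but rs3 scores best on the combination, which is the point of blending the two signals.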
Common scheduling algorithms for server load balancing
Overview
In our previous article in the Windows-platform distributed architecture series on load balancing, we discussed load balancing a Web site with NLB (Network Load Balancing) on Windows and demonstrated its effectiveness through stress testing, which can be said
    server 192.168.1.128:3000;
}
With this upstream (the preceding server entry, on port 9200, presumably carries weight=3, as the traffic split implies), out of every 4 requests, 3 go to port 9200 and 1 goes to port 3000. Recent versions of Nginx can also use the "least connected" and "ip-hash" load-balancing algorithms.
5. Health check
Nginx's health check marks a server that fails detection as failed and stops sending it new requests. Setting max_fails to 0 turns the health check off. Nginx supports the following health parameters: max_fails=3 (consider the server failed after 3 failed checks),
For example, suppose the two real servers each start out handling 500 connections. Before the next request arrives, server1 has released only 10 connections while server2 has released 490; yet at that moment the new request is still directed to server1.
The dynamic scheduling modes decide which real server should receive the next connection based on feedback about how busy each RS is (the dynamic feedback
2 to subnet1 and subnet2 respectively. When the last two bits of the two addresses differ, the algorithm's formula yields different results, and the data streams of the two connections are sent out through different interfaces.
When there are N NIC interfaces, the number of connections sent from interface i over a given period is (i = 1, …, N), the interface sending the data stream of connection j is, the
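The XOR-style interface selection described above can be sketched as follows. This is modeled on the common layer-2 transmit-hash policy used by Linux bonding (XOR of the MAC addresses modulo the link count); the MAC addresses are made up.

```python
def xor_select(src_mac: str, dst_mac: str, n_ifaces: int) -> int:
    """Pick an outgoing interface by XOR-ing the last byte of the source
    and destination MAC addresses, modulo the interface count (the same
    idea as Linux bonding's layer2 xmit_hash_policy)."""
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % n_ifaces

# Two flows whose destination MACs differ in the low bits can land on
# different interfaces, spreading traffic across the aggregated links.
a = xor_select("00:16:3e:00:00:01", "00:16:3e:00:00:10", 2)
b = xor_select("00:16:3e:00:00:01", "00:16:3e:00:00:11", 2)
```

Note that all packets of one flow always hash to the same interface, which preserves packet ordering but means a single large flow cannot exceed one link's bandwidth.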
load balancing, which is the biggest difference).
Note:
Use apachectl -k restart to restart Apache every time you modify httpd.conf.
When a user needs the sticky-session feature for load balancing in a multi-host environment
), abbreviated WLC. When server performance varies within the cluster, the scheduler uses the "weighted least connections" scheduling algorithm to optimize load balancing: servers with higher weights bear a larger proportion of the active connection load. The scheduler can automatically query the
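A minimal sketch of the weighted-least-connections selection rule (the server pool below is hypothetical; note that the real LVS implementation compares conns_i * weight_j against conns_j * weight_i with integer cross-multiplication, which is equivalent to the ratio used here but avoids division):

```python
# Weighted least connections: prefer the server with the smallest
# active-connections-to-weight ratio. Pool values are made up.
pool = [
    {"name": "rs1", "weight": 1, "conns": 10},   # ratio 10.0
    {"name": "rs2", "weight": 3, "conns": 24},   # ratio 8.0  <- least loaded per unit weight
    {"name": "rs3", "weight": 2, "conns": 30},   # ratio 15.0
]

def wlc_pick(pool):
    best = min(pool, key=lambda s: s["conns"] / s["weight"])
    best["conns"] += 1  # the chosen server takes the new connection
    return best["name"]

chosen = wlc_pick(pool)
```

Even though rs1 has the fewest raw connections, rs2's higher weight means it is the least loaded relative to its capacity, so it receives the next connection.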