The previous article covered the Nginx load balancer; this article implements high availability (HA). The overall design of the system uses Nginx for load balancing, so if the single Nginx machine fails, the whole system stops working. To meet the high-availability requirements of the system architecture, this single point of failure has to be addressed.
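A common way to remove this single point of failure is to run two Nginx hosts and float a virtual IP between them with Keepalived (VRRP). The article's actual configuration is not shown in this excerpt; the following is only a minimal sketch, and the interface name eth0, the virtual_router_id, and the virtual IP 192.168.1.200 are illustrative assumptions.

    # keepalived.conf on the primary Nginx host
    vrrp_instance VI_1 {
        state MASTER              # set to BACKUP on the standby host
        interface eth0            # assumed NIC name
        virtual_router_id 51
        priority 100              # give the standby a lower priority, e.g. 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.1.200         # clients always connect to this virtual IP
        }
    }

When the primary host fails, the standby takes over the virtual IP, so clients keep reaching a working Nginx without any DNS change.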
Linux Server Load Balancer cluster system solution. For details, refer to the following section.
1. Introduction to Linux virtual servers
Linux Virtual Server (LVS) is a high-availability Server Load Balancer cluster system. It can provide scalable network services by distributing incoming requests across a pool of real servers.
This article mainly introduces several methods by which an Nginx Server Load Balancer can handle session sharing.
1) Use cookies instead of sessions. By turning session state into cookies, you can avoid some drawbacks of server-side sessions. A J2EE book I read earlier also points out that plain in-memory sessions cannot be used in a cluster, otherwise it becomes difficult to keep state consistent across the nodes.
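Besides cookies, session affinity can also be handled at the load balancer itself, for example with Nginx's built-in ip_hash directive (the IP-hash method is mentioned again later in this collection). A minimal sketch inside the http {} context, with hypothetical backend addresses:

    upstream app_servers {
        ip_hash;                      # requests from one client IP always hit the same backend
        server 192.168.1.101:8080;    # hypothetical application server 1
        server 192.168.1.102:8080;    # hypothetical application server 2
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
        }
    }

Note that clients behind the same NAT gateway all map to one backend, so ip_hash trades even distribution for stickiness.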
This article describes how to set up static file separation for an Nginx + Tomcat Server Load Balancer. It mainly explains how to use Nginx as the front-end web server and Tomcat as the backend application server: all Internet requests arrive at Nginx and are forwarded to the intranet Tomcat for processing, i.e. Nginx reverse-proxies requests to Tomcat, while static files are served by Nginx itself.
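A minimal sketch of such a static/dynamic split, assuming a hypothetical /data/static directory for static assets and Tomcat listening on 127.0.0.1:8080:

    server {
        listen 80;
        server_name example.com;               # assumed host name

        # Static assets are served by Nginx directly
        location ~* \.(css|js|gif|jpg|jpeg|png|ico)$ {
            root /data/static;
            expires 7d;
        }

        # Everything else is reverse-proxied to the backend Tomcat
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }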
LVS-NAT model: similar to DNAT, but with support for multi-target forwarding, i.e. multi-target DNAT. The Director forwards a request by rewriting the destination address of the request packet to the RIP of the RS selected by the scheduling algorithm. Architectural features: (1) the RSs should use private addresses, i.e. each RIP should be a private address, and every RS's gateway must point to the DIP; (2) both the request and the response packets pass through the Director, so under heavy load the Director easily becomes the bottleneck.
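For reference, an LVS-NAT virtual service is normally defined on the Director with ipvsadm; the VIP 192.168.0.100 and the RIPs below are illustrative assumptions, not values from the article.

    # Define the virtual service on the VIP with round-robin scheduling
    ipvsadm -A -t 192.168.0.100:80 -s rr

    # Add two real servers in NAT (masquerading) mode
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -m
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.12:80 -m

    # The Director must route between the public and private networks
    echo 1 > /proc/sys/net/ipv4/ip_forward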
For web application clustering I started learning with Tomcat 5.5. Below are some of my hands-on steps and experiences.
Section 1: Environment

Server Load Balancer:
  Operating System: Windows XP
  IP Address: 192.168.1.200
  Apache: apache_2.2.13-win32-x86-openssl-0.9.8k.msi
  Mod_jk: mod_jk-1.2.28-httpd-2.2.3.so (for Windows)

Cluster node tomcat1:
  Operating System: SuSE Linux Enterprise Server 10
  IP Address: 192

(A sample mod_jk load-balancer configuration for this environment is sketched below.)
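The article's own mod_jk configuration is not included in this excerpt; the sketch below only illustrates the usual shape of such a setup. The worker names, the 192.168.1.x addresses of the Tomcat nodes, and the AJP port 8009 are assumptions.

    # conf/workers.properties
    worker.list=lb

    # Cluster node tomcat1 (hypothetical address)
    worker.tomcat1.type=ajp13
    worker.tomcat1.host=192.168.1.201
    worker.tomcat1.port=8009
    worker.tomcat1.lbfactor=1

    # Cluster node tomcat2 (hypothetical address)
    worker.tomcat2.type=ajp13
    worker.tomcat2.host=192.168.1.202
    worker.tomcat2.port=8009
    worker.tomcat2.lbfactor=1

    # Load-balancer worker that Apache mounts
    worker.lb.type=lb
    worker.lb.balance_workers=tomcat1,tomcat2
    worker.lb.sticky_session=true

    # httpd.conf additions
    LoadModule jk_module modules/mod_jk.so
    JkWorkersFile conf/workers.properties
    JkLogFile logs/mod_jk.log
    JkMount /* lb

Each Tomcat node must expose its AJP connector on port 8009 and, if sticky sessions are used, set a jvmRoute matching its worker name in server.xml.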
The last two lectures mainly covered the Nginx environment and did not touch real development work. In this example, I describe how to configure Nginx as the Server Load Balancer and WWW server, and how to implement it. The following is an actual scenario: one Server Load Balancer sits in front of several WWW servers.
Server Load Balancing and redundancy are two different technologies. Each can be used on its own, but they only become really powerful when combined, and in practice they usually are combined. Server Load Balancing (SLB) distributes incoming traffic across multiple servers.
Nginx: adding a Server Load Balancer for Docker containers
As the most popular Server Load Balancer and reverse proxy server, Nginx runs on Linux. To achieve traffic distribution and load balancing, you deploy the application on several application servers (IIS in that article's example) and let Nginx spread the requests among them.
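For the Docker case in the heading above, one common pattern is to publish each container on a different host port and list those ports in an Nginx upstream. The image name myapp and the ports below are hypothetical:

    # Run two instances of the same application image on different host ports
    docker run -d -p 8081:80 --name web1 myapp:latest
    docker run -d -p 8082:80 --name web2 myapp:latest

    # nginx.conf (inside the http {} context)
    upstream docker_web {
        server 127.0.0.1:8081;    # container web1
        server 127.0.0.1:8082;    # container web2
    }

    server {
        listen 80;
        location / {
            proxy_pass http://docker_web;
        }
    }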
Considering the respective shortcomings of LVS and Nginx: LVS uses a synchronous request-forwarding policy, while Nginx uses an asynchronous (proxying) forwarding policy. The practical difference is that when Nginx acts as the load balancer, both the request and the response traffic pass through the Nginx server, whereas with LVS (in DR mode) only the request traffic passes through LVS and the response goes back directly from the real server to the client.
This corresponds to the upstream localhost {} block. After the steps above, the load balancer configuration is complete. Next start Tomcat_1 and Tomcat_2, then start Nginx by double-clicking Nginx.exe in the Nginx root directory or by running start nginx (shut it down with nginx -s stop). Open a browser and enter http://localhost; you should see the Tomcat home page. Learn more about Nginx configuration: http://www.ho
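For reference, the configuration this walkthrough describes looks roughly like the sketch below; the Tomcat HTTP ports 8080 and 9080 are assumptions rather than values quoted from the article.

    upstream localhost {
        server 127.0.0.1:8080;    # Tomcat_1 (assumed port)
        server 127.0.0.1:9080;    # Tomcat_2 (assumed port)
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://localhost;   # refers to the upstream group named "localhost"
            proxy_set_header Host $host;
        }
    }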
Figure 2.1.1: SLB capture settings: initialization page
If I select the first task, I must make sure that all the prerequisites listed in the checklist are met before executing the capture session.
Figure 2.1.2: Server Load balancer settings: scheduled Environment check list
On the following page: (screenshot not reproduced here)
vcl.load first6 ./default.vcl
vcl.use first6
Then have the client start issuing requests and try it out:
(screenshot not reproduced here) Even a forced refresh against web1 at this point is still served from the cache.
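The default.vcl loaded above is not reproduced in this excerpt. As a rough sketch of what a Varnish load-balancing configuration of this kind looks like, assuming Varnish 4.x and hypothetical backends web1 and web2:

    vcl 4.0;
    import directors;

    backend web1 { .host = "192.168.1.101"; .port = "80"; }
    backend web2 { .host = "192.168.1.102"; .port = "80"; }

    sub vcl_init {
        # Round-robin director spreading cache misses over both backends
        new web_pool = directors.round_robin();
        web_pool.add_backend(web1);
        web_pool.add_backend(web2);
    }

    sub vcl_recv {
        set req.backend_hint = web_pool.backend();
    }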
The scheduler transfers requests evenly to the different servers for execution and automatically masks server failures, thereby turning a group of servers into a single high-performance, highly available virtual server. The structure of the whole server cluster is transparent to the client, and no client-side or server-side programs need to be modified. To achieve this, transparency, scalability, high availability, and manageability all have to be considered at design time. Cluster structure:
With all six master/slave nodes started, the web service comes up and data access works perfectly. Everything is OK, so our SSM web application + Redis cluster + MySQL read/write-splitting architecture, load balanced for high availability, high performance, and high scalability, is complete. There are still many flaws to fix later, and the architecture still needs Nginx in front for load balancing.
1. Introduction to load balancing
The main open-source load-balancing software packages are LVS, Keepalived, HAProxy, Nginx, and so on;
LVS works at layer 4 (of the OSI seven-layer network model), Nginx works at layer 7, and HAProxy can act as a layer-4 balancer or be used at layer 7;
Keepalived's load-balancing function is in fact provided by LVS;
LVS performs this layer-4 load balancing by forwarding traffic based on IP address and port, without inspecting application-layer content; a minimal Keepalived/LVS configuration is sketched below.
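To make the Keepalived point concrete: a virtual_server block in keepalived.conf is translated directly into IPVS (LVS) rules. A minimal sketch with hypothetical addresses:

    virtual_server 192.168.1.100 80 {      # VIP exposed to clients (assumed)
        delay_loop 6                       # health-check interval in seconds
        lb_algo rr                         # round-robin scheduling
        lb_kind DR                         # LVS direct-routing mode
        protocol TCP

        real_server 192.168.1.101 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
            }
        }
        real_server 192.168.1.102 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
            }
        }
    }

Running ipvsadm -Ln on the director afterwards shows the virtual service and real servers that these rules created.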
The Server Load Balancer of Oracle RAC distributes new connection requests to the node with the smallest load, based on the connection load of each node in the RAC. While the database is running, the PMON process on each node periodically registers that node's connection load with the listeners (service registration).
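On the client side, this server-side mechanism is usually paired with an Oracle Net connect descriptor that lists every node, so connect-time balancing and failover work together with the listener-based balancing described above. The host names rac1-vip/rac2-vip and the service name below are hypothetical:

    # tnsnames.ora (client side)
    MYSERVICE =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (LOAD_BALANCE = ON)
          (FAILOVER = ON)
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = myservice)
        )
      )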
Nginx can be used not only as a powerful web server but also as a reverse proxy server. Nginx can also separate dynamic and static pages according to scheduling rules, and it can load-balance backend servers in several ways, such as round robin, IP hash, URL hash, and weighting. Health checking of backend servers is also supported.
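A minimal upstream sketch showing weighted round robin together with Nginx's built-in passive health checking; the addresses, weights, and thresholds are illustrative assumptions:

    upstream backend {
        # hash $request_uri consistent;   # URL-hash scheduling (nginx 1.7.2+); if enabled it replaces round robin
        server 10.0.0.11:8080 weight=3 max_fails=2 fail_timeout=30s;   # receives roughly 3x the traffic
        server 10.0.0.12:8080 weight=1 max_fails=2 fail_timeout=30s;
    }

After max_fails failed attempts within fail_timeout, Nginx temporarily stops sending requests to that server, which is the passive form of the health check mentioned above.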
If there is only one server and that server goes down, the whole service is unavailable; this is exactly the single point of failure that load balancing and redundancy are meant to eliminate.
Server Load Balancing for ASP.NET sites: since they are based on the HTTP protocol, we find that two problems need to be solved. First, to achieve load balancing we need a load balancer; DNS round robin can be used so that different clients obtain different server addresses.
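DNS round robin simply means publishing several A records for the same name; a sketch in BIND zone-file syntax with documentation-range addresses:

    ; two web servers behind the same host name
    www    IN    A    203.0.113.10
    www    IN    A    203.0.113.11

Resolvers rotate through the records, so successive clients are spread across the servers, although DNS by itself cannot detect a failed server.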
SQL Relay 0.50 includes some improvements for using SQL Server through FreeTDS, adds parameters for formatting dates and times, and fixes other bugs and memory leaks. SQL Relay is a persistent database connection pool that provides connection pooling, proxying, and load balancing for Unix or Linux.