Part One: load balancing at the Web layer. Under the .NET platform there are two load-balancing setups I have deployed (IIS7 and nginx); the following uses nginx as an example to explain load balancing at the web layer. Introduction: Nginx surpasses Apache in performance and stability, so more and more websites in China use Nginx as their web server, including Sina Blog.
1. Introduction to load balancing
The main open source packages are LVS, Keepalived, HAProxy, Nginx, and so on;
LVS works at layer 4 (of the OSI 7-layer network model), Nginx works at layer 7, and HAProxy can act as either a layer-4 or a layer-7 load balancer;
Keepalived's load balancing function is in fact provided by LVS;
LVS, as a layer-4 load balancer
Building a MySQL server load balancing and high-availability environment
Abstract: RHEL 5.8, MySQL, Keepalived, and HAProxy are used to build a cluster with high availability and load balancing.
This article shares part six of the Nginx and PHP installation and configuration series, the Nginx Reverse Proxy and Load Balancer Deployment Guide. It has some reference value, and friends who need it can refer to it.
1. Locate and open the nginx conf file.
2. Load balancing configuration: by default, Nginx's upstream module uses round-robin (poll-based) load balancing.
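As a minimal sketch of the default round-robin behavior described above (the upstream name `backend` and the server addresses are assumptions for illustration, not from the original article):

```nginx
http {
    # Round-robin is the default: requests are handed to the
    # listed servers in turn.
    upstream backend {
        server 192.168.0.11:8080;
        server 192.168.0.12:8080 weight=2;  # weight biases the round-robin
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
```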
This article mainly introduces how to build an Nginx + Tomcat + Memcached server load balancing cluster service.
Please indicate the source when reprinting: http://blog.csdn.net/l1028386804/article/details/48289765
Operating system: CentOS 6.5
This document describes how to set up an Nginx + Tomcat + Memcached server load balancing cluster.
HAProxy for server load balancing
HAProxy provides high availability, load balancing, and proxying for TCP and HTTP-based applications. It supports virtual hosts and is a free, fast, and reliable solution. HAProxy is especially suitable for websites under extremely high load, which usually require session persistence or layer-7 processing.
Introduction to load balancing clusters
Replicating sessions in real time between Tomcat instances keeps them in sync; however, this scheme is inefficient and does not perform well under high concurrency. Using Nginx's IP-based hash routing strategy makes it easier to ensure that requests from a given IP are always routed to the same Tomcat; but if the application serves many users logging in at the same time from one local area network, that load balancing does not work well. Using Memcached to centralize the sessions of multiple Tomcats, there may be a delay of seconds when receiving the response.
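A hedged sketch of the IP-hash strategy mentioned above (the upstream name and server addresses are assumptions for illustration):

```nginx
upstream tomcat_cluster {
    # ip_hash routes each client IP to the same backend,
    # giving session stickiness without session replication.
    ip_hash;
    server 192.168.0.21:8080;
    server 192.168.0.22:8080;
}
```

Note that ip_hash hashes the client address, so many clients behind one LAN gateway all map to the same Tomcat, which is exactly the limitation the paragraph above points out.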
A network load balancer distributes incoming network traffic among one or more virtual IP addresses (cluster IP addresses) assigned to the network load balancing cluster, providing scalable performance. The hosts in the cluster then respond to different client requests simultaneously.
Recently, the company had a project whose users worried that a single machine could not handle the expected number of users, so they required an application cluster. We designed the application cluster architecture based on how the application is used.
The architecture diagram is as follows:
[Figure: logical architecture]
Version               Release  Date         Status  License
IPVS for Kernel 2.4   1.0.12   17-Nov-2004  Stable  GNU General Public License (GPL)
IPVS for Kernel 2.2   1.0.8    14-May-2001  Stable  GNU General Public License (GPL)
LVS Load Balancing Schemes
Monitor LVS with self-written scripts
Heartbeat + LVS + ldirectord: more complicated and harder to control
Configure LVS with Piranha, the tool provided by Red Hat
Keepalived + LVS solution (recommended)
2.2 LVS Load Balancing
I had some free time recently, so I studied Nginx a bit and put together a simple introductory example.
Brief introduction:
Nginx (engine x) is a lightweight Web Server, reverse proxy server, and e-mail (imap/pop3) proxy server.
Clients normally access a server directly. With an Nginx server in front, we can deploy the same application to different servers; the access mode is as follows. This greatly improves concurrency capacity, reduces the pressure on each server, and improves performance.
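A minimal sketch of the access pattern described above, assuming a hypothetical upstream group `app_servers` with the same application deployed on two hosts (names and addresses are illustrative, not from the original article):

```nginx
upstream app_servers {
    server 10.0.0.1:8080;   # same application deployed on both hosts
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        # clients hit nginx; nginx forwards each request to one app server
        proxy_pass http://app_servers;
    }
}
```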
the events in the ngx_posted_accept_events queue are processed first. After they are processed, the ngx_accept_mutex lock is released, and then the events in ngx_posted_events are handled; this greatly reduces the time the ngx_accept_mutex lock is held.
Server Load balancer
When establishing a connection, if multiple worker processes compete for a new connection, only one worker process will ultimately succeed in accepting the connection.
Next, modify the configuration files of tomcat2. First, edit tomcat2/bin/startup.sh and tomcat2/bin/shutdown.sh and add the following content:

export JAVA_HOME=/usr/java/jdk1.7.0_71
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib:$JAVA_HOME/bin
export CATALINA_HOME=$CATALINA_2_HOME
export CATALINA_BASE=$CATALINA_2_BASE

Then edit tomcat2/server.xml and change a few ports: 8005 -> 8006, 8080 -> 8081, 8009 -> 8010 (the specific ports can be chosen as you like).
The configuration file is divided into six main areas: main (global settings), events (nginx working mode), http (HTTP settings), server (virtual host settings), location (URL matching), and upstream (load-balancing server settings).
The main area below contains global settings:
user nobody nobody;
worker_processes 2;
error_log /usr/local/var/log/nginx/error.log notice;
pid /usr/local/var/run/nginx/nginx.pid;
worker_rlimi
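The six areas listed above fit together roughly like this (a structural sketch only; the directive values are placeholders, not taken from the original article):

```nginx
# main: global settings
worker_processes 2;

events {
    # events: nginx working mode
    worker_connections 1024;
}

http {
    # upstream: load-balancing server settings
    upstream backend {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        # server: virtual host settings
        listen 80;

        location / {
            # location: URL matching
            proxy_pass http://backend;
        }
    }
}
```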
The front end of the application is an Nginx server. All static content is served by Nginx, and all PHP requests are distributed to several downstream servers running the PHP FastCGI daemon. In this way the system load can be shared cheaply, expanding the load capacity of the system. The IP addresses of the three servers
1. Install Nginx with yum:
yum install nginx
2. Start Nginx:
chkconfig nginx on
service nginx start
Put the test file on the web server:
<html>
<head>
<title>welcome to nginx!</title>
</head>
<body bgcolor="white" text="Black">
<center><h1>welcome to nginx! 192.168.232.132</h1></center>
</body>
</html>
To configure the load balancer server, edit the Nginx configuration:
vi /etc/nginx
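Under the usual pattern for this kind of setup, the load-balancer configuration would look something like the sketch below. The upstream name `web_pool` and the second backend address are assumptions; 192.168.232.132 is the test node shown in the HTML page above.

```nginx
upstream web_pool {
    server 192.168.232.132;   # test node from the page above
    server 192.168.232.133;   # assumed second backend
}

server {
    listen 80;
    location / {
        proxy_pass http://web_pool;
    }
}
```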
1) Open the "httpd.conf" file in the "/usr/local/apache2/conf" directory and add the following configuration item at the end of the file, as shown in Figure 4-2-1.
ProxyRequests Off
ProxyPass / balancer://mycluster/
BalancerMember ajp://localhost:10009 route=TOMCAT1
BalancerMember ajp://localhost:20009 route=TOMCAT2
Figure 4-2-1
Description: "mycluster" is the name of the cluster, and "ajp://localhost:100