Hands-on: an Nginx load-balanced, highly redundant, high-availability web architecture


One of the company's main websites recently finished its redesign and finally went live. The project took up half a year of my time, and now that I can finally sit down and write, I want to summarize the technical experience. Given the number and quality of servers available, I went with a load-balanced, highly redundant architecture that avoids single points of failure. For the web tier I abandoned Apache in favor of Nginx, and the database uses a master/slave setup. The architecture currently carries about 800,000 PV without much pressure.

A brief word on choosing the web server: Nginx or Apache? Many people struggle with this decision when planning a site, and some even start out on Apache and switch to Nginx later. Here is how I decide. The strengths and weaknesses of both are obvious, so the final choice should come down to your site's own characteristics: the nature of the page content, the type of site (e-commerce, portal, and so on). If after weighing content and type you still cannot decide, my rule of thumb is: for high concurrency I choose Nginx; for sites with very frequent dynamic requests but modest concurrency I use Apache. This site is a portal whose content is mostly generated as static pages; the dynamic pages are few and are periodically rendered to HTML, so I chose Nginx for the web tier.

First, the scenario this architecture suits: a small to medium-sized website, roughly estimated at up to 1,000,000 PV.

Application environment

Main Nginx: 192.168.1.158

Web server 1: 192.168.1.163

Web server 2: 192.168.1.162

Master database server: 192.168.1.159

Slave database + standby Nginx: 192.168.1.161

Nginx virtual IP: 192.168.1.160

Operating system: CentOS 6.4
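To make the topology easier to follow, here is a rough sketch of how these machines relate, drawn purely from the IP list above:

clients
   |
virtual IP 192.168.1.160 (held by keepalived)
   |
main Nginx load balancer 192.168.1.158   (standby 192.168.1.161 takes over on failure)
   |
   +--> web server 1: 192.168.1.163
   +--> web server 2: 192.168.1.162
            |
            +--> master MySQL 192.168.1.159  --replication-->  slave MySQL 192.168.1.161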

I. A word on installing software with Yum versus compiling from source

People still argue about whether to compile from source or to install with Yum. I do not think the argument is worth having; it cannot be settled, and it mostly comes down to personal working habits. Yum genuinely saves time: installation is simple and fast, which also helps newcomers stay interested in Linux. Compiling from source allows fine-grained customization, lets you leave out features you do not need, reduces the attack surface, and so on. With Yum you can likewise disable unneeded features after installation, while with a source build you at least know exactly where everything was installed. Each approach has its pros and cons, so there is no need to agonize over it. I chose based on the time I had, and since I am chronically short of time, I mostly installed with Yum.

II. Installing and synchronizing the master/slave MySQL

The configuration is very simple; interested readers can see my previous article, "MySQL Linux high-availability architecture analysis and master/slave replication in practice (1)".
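The replication setup itself is not repeated here, but a quick way to confirm the slave is actually replicating (standard MySQL commands, run on the slave at 192.168.1.161) is:

mysql -uroot -p -e 'show slave status\G' | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
# expect:
#   Slave_IO_Running: Yes
#   Slave_SQL_Running: Yes
#   Seconds_Behind_Master: 0 (or a small number)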

III. Installing PHP

Install PHP separately on both web servers; the process is very simple.

1. yum install php*

service php-fpm start
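Not mentioned in the original, but worth doing on CentOS 6 so php-fpm comes back after a reboot (standard chkconfig usage):

chkconfig php-fpm on
chkconfig --list php-fpm    # confirm it is enabled for runlevels 2-5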

2. Adjust the PHP time zone

Be careful to set the PHP time zone: if PHP's time does not match the system time, you will run into errors. Find the line ;date.timezone = in php.ini, remove the leading semicolon, and set it to your time zone.
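For example (the value below is only an illustration; set whatever zone your servers actually run in):

; in php.ini
date.timezone = Asia/Shanghai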

3. Installing the Zend accelerator

Be sure to install it: a PHP accelerator really does improve PHP's speed. If you do not use Zend, install at least one other kind of accelerator.

I already had the Zend package from a previous installation, so I used it directly: upload it and unpack it into the target directory.

Open the php.ini configuration file and add the Zend configuration lines at the end.
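The exact lines depend on which Zend package and PHP version you have. For the Zend Guard Loader that matches the PHP 5.3 shipped with CentOS 6.4, a typical addition looks like this (the path is an assumption based on wherever you unpacked the package; adjust it to your directory):

; appended to php.ini
zend_extension=/usr/local/zend/ZendGuardLoader.so
zend_loader.enable=1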

Restart PHP for the change to take effect: service php-fpm restart

IV. Installing the web server (Nginx)

Because I am short on time, I installed it with Yum. The Yum repositories that ship with CentOS 6.4 contain no Nginx package, so I first switched to the Atomic repository and then installed with Yum.

Download the installer directly: wget http://www.atomicorp.com/installers/atomic

1. Switch the Yum repository

sh ./atomic

Accept the default installation and you will see Atomic.repo appear under /etc/yum.repos.d/.

To make the new repository take effect immediately, run yum check-update so Yum refreshes its metadata.

2. Installing Nginx

yum install nginx

The Nginx version installed this way is 1.6.2-23, which is the 1.6 release I wanted.

Back up the stock configuration and create a virtual-host directory:

mv /etc/nginx/nginx.conf /etc/nginx/nginx.bak
mkdir -p /etc/nginx/vhosts/

Write the following tuned configuration into the new nginx.conf:

#######################################################################
#
# This is the main Nginx configuration file.
#
# More information about the configuration options is available on
#   * The English wiki - http://wiki.nginx.org/Main
#   * The Russian documentation - http://sysoev.ru/nginx/
#
#######################################################################

#----------------------------------------------------------------------
# Main Module - directives that cover basic functionality
#
#   http://wiki.nginx.org/NginxHttpMainModule
#
#----------------------------------------------------------------------

user root root;
worker_processes 8;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 65535;

events
{
    use epoll;
    worker_connections 65535;
}

http
{
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssi on;
    ssi_silent_errors on;
    ssi_types text/shtml;

    #charset gb2312;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 8m;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;

    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    fastcgi_intercept_errors on;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    include /etc/nginx/vhosts/*.conf;
}

3. Create the site configuration in /etc/nginx/vhosts/

vi www.conf

server
{
    listen 80;
    server_name www.test.com;
    index index.html index.php;
    root /www/web_www;

    location ~ .*\.(php|php5)?$
    {
        #fastcgi_pass unix:/tmp/php-cgi.sock;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi.conf;
    }
}

Apply the same configuration on both web servers.
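Before putting the machines behind the load balancer, it is worth verifying each web server on its own. None of the following is in the original article, but the commands are standard; the test page name is just an example:

nginx -t                                               # syntax-check the new configuration
service nginx restart
echo '<?php phpinfo(); ?>' > /www/web_www/info.php     # temporary test page
curl -H 'Host: www.test.com' http://127.0.0.1/info.php | head
rm -f /www/web_www/info.php                            # remove it afterwards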

V. Implementing load balancing

Nginx load balancing is very stable; I do not even monitor whether the nginx process is alive. The handful of sites I maintain go down maybe once or twice a year, so I never wrote (or borrowed) a watchdog script. Still, every site is different: add whatever monitoring your situation actually needs, and settle on whatever is most stable for you.
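For readers who do want such a watchdog, here is a minimal sketch (my addition, not from the original article) that can be run from cron every minute; it simply restarts nginx if no process is found:

#!/bin/bash
# /usr/local/sbin/check_nginx.sh - restart nginx if it has died
if ! pidof nginx > /dev/null; then
    echo "$(date) nginx not running, restarting" >> /var/log/nginx_watchdog.log
    service nginx restart
fi

# crontab entry (every minute):
# * * * * * /usr/local/sbin/check_nginx.sh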

1. Installing and configuring Nginx on the load balancers

Install Nginx on both load-balancing machines; the process is the same as above, so I will not repeat it. The configuration file is pasted below for reference.

user root root;
worker_processes 8;
pid /var/run/nginx.pid;
worker_rlimit_nofile 65535;

events
{
    use epoll;
    worker_connections 65535;
}

http {
    include mime.types;
    default_type application/octet-stream;

    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 8m;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;

    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    upstream backend {
        server 192.168.1.162;
        server 192.168.1.163;
    }

    server {
        listen 80;
        server_name www.test.cn;

        location / {
            root /var/www;
            index index.htm index.html index.php index.shtml;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://backend;
        }
    }
}
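The upstream block above uses Nginx's defaults. Nginx also lets you weight the backends and take them out of rotation after repeated failures; the parameters below are standard upstream options, shown here only as an optional refinement rather than part of the original setup:

upstream backend {
    server 192.168.1.162 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.1.163 weight=1 max_fails=2 fail_timeout=30s;
}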

2. Installing keepalived with Yum

yum install keepalived

3. Configuring keepalived on the master

vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    # the master's priority must be higher than the backup's so it wins the VRRP election
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test.cn
    }
    virtual_ipaddress {
        192.168.1.160
    }
}

service keepalived start

After starting, check the interface addresses with ip addr; if the virtual IP 192.168.1.160 appears there, the VIP is up.
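For example (the command is standard iproute2; the output line is illustrative, keepalived adds the VIP as a /32 by default):

ip addr show eth1 | grep 192.168.1.160
#    inet 192.168.1.160/32 scope global eth1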

4. Configuring keepalived on the backup

! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test.cn
    }
    virtual_ipaddress {
        192.168.1.160
    }
}
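A quick failover check before the full test below (these commands are standard, not part of the original article): stop keepalived on the master, confirm the VIP has moved to the backup, then start it again and confirm the VIP moves back.

# on the master (192.168.1.158)
service keepalived stop

# on the backup (192.168.1.161) - the VIP should now be here
ip addr show eth1 | grep 192.168.1.160

# on the master again - with the higher priority it preempts and reclaims the VIP
service keepalived start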

The test process is as follows:

1. Restart the main Nginx server and the standby Nginx server in turn, watch whether the website stays reachable, and keep pinging the virtual IP 192.168.1.160 to see whether the site can still be visited normally.

Watching the ping is the most intuitive check: only one packet was lost, and the website stayed open the whole time.

2. Restart the WEB1 and WEB2 servers in turn and watch the website again.

The site opened normally throughout.

Result: if neither test affects normal access to the site, the load-balanced architecture is working. Congratulations; enjoy that small thrill of success.

Notes:

1. Both web servers' Nginx configurations set worker_connections 65535; the CentOS kernel must be tuned to match, otherwise it will not take effect.

vi /etc/sysctl.conf

Append the following at the end of sysctl.conf:

# add
net.ipv4.tcp_max_syn_backlog = 65536
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_tw_recycle = 1
#net.ipv4.tcp_tw_len = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.ip_local_port_range = 1024 65535

Reboot the server, or apply the changes immediately with /sbin/sysctl -p
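One related point that the kernel parameters above do not cover: worker_rlimit_nofile 65535 only helps if the system's open-file limit allows it. On CentOS 6 that usually means also raising the limit in /etc/security/limits.conf (this addition is mine, not from the original article):

# /etc/security/limits.conf
*    soft    nofile    65535
*    hard    nofile    65535

# verify after logging in again:
ulimit -n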

2. Nginx log rotation: if you use the Nginx logging configuration shown above, nothing needs to change; the logs are split automatically every night in the early hours, but the log path must be under /var/log/nginx.
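The splitting mechanism itself is not reproduced in the article. If you need to set it up yourself, here is a minimal cron-based sketch (assumed paths: access log at /var/log/nginx/access.log, pid file at /var/run/nginx.pid); it renames the log and sends nginx the USR1 signal so it reopens a fresh file:

#!/bin/bash
# /usr/local/sbin/cut_nginx_log.sh - run from cron shortly after midnight
logdir=/var/log/nginx
mv ${logdir}/access.log ${logdir}/access_$(date -d yesterday +%Y%m%d).log
kill -USR1 $(cat /var/run/nginx.pid)   # tell nginx to reopen its log files

# crontab entry:
# 0 0 * * * /usr/local/sbin/cut_nginx_log.sh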

This article comes from the "Wine Light" blog; please keep the source when reposting: http://hostslinux.blog.51cto.com/8819775/1613387

