Building a high-availability web server with DNS + Squid + Nginx + MySQL on CentOS 6.4


I. What is Squid?

Squid is a piece of software that caches Internet data. It works by accepting requests for the objects people want to download and handling those requests on their behalf. In other words, if a user wants to download a web page, he asks Squid to fetch it for him. Squid connects to the remote server, requests the page, passes the data through to the client machine, and keeps a copy of it at the same time. When someone later requests the same page, Squid can simply read it from disk, and the data is transferred to the client almost immediately. Current versions of Squid can handle HTTP, FTP, Gopher, SSL, WAIS, and similar protocols, but not POP, NNTP, RealAudio, and the like.

Definitions of squid proxies

Forward proxy

A. Standard Proxy Buffer Server

A standard proxy cache server is used to cache static web pages (such as HTML files and image files) on a host in the local network (the proxy server). When a cached page is requested a second time, the browser obtains the data directly from the local proxy server instead of from the original web site. This saves valuable network bandwidth and improves access speed. To use this mode, however, the IP address and port of the proxy server must be configured in the browser of every internal host. Each time a client accesses the Internet, the request goes to the proxy server, which decides whether it needs to connect to the remote web server. If the target file exists in the local cache, it is handed to the user directly; if not, the proxy fetches it first, stores a copy in the local cache, and then sends the file to the client browser.
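The fetch-or-serve decision described above can be sketched in a few lines of shell. This is a toy illustration only, not Squid code; the cache directory and the "download" step are stand-ins:

```shell
# Toy sketch of a proxy cache's decision logic: serve an object from the
# local cache if present, otherwise "download" it and keep a copy on disk.
cache=$(mktemp -d)
fetch() {   # fetch URL -> prints HIT or MISS, then the object body
  key=$(printf '%s' "$1" | md5sum | cut -d' ' -f1)
  if [ -f "$cache/$key" ]; then
    echo HIT                                      # later requests: served from disk
  else
    echo MISS                                     # first request: go to the origin
    printf 'body of %s\n' "$1" > "$cache/$key"    # stand-in for the remote download
  fi
  cat "$cache/$key"
}
fetch http://www.example.com/index.html   # MISS: fetched and cached
fetch http://www.example.com/index.html   # HIT: served straight from disk
```

The second call never touches the "origin", which is exactly the bandwidth saving the paragraph above describes.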

B. Transparent proxy cache server (usually installed on the LAN gateway and used together with the firewall)

A transparent proxy cache provides the same functions as a standard proxy server, but the proxy operation is transparent to the client browser (that is, there is no need to configure the proxy server's IP address and port in the browser). The transparent proxy intercepts network traffic and filters outbound HTTP (port 80) requests. If a client request is cached locally, the cached data is sent to the user directly; if not, the request is forwarded to the remote web server. The remaining behavior is identical to a standard proxy server. On Linux, transparent proxying is implemented with iptables or ipchains. Because nothing needs to be configured in the browser, transparent proxying is particularly useful for ISPs (Internet service providers).
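On Linux the interception step can be a single iptables rule. A sketch only, assuming Squid listens on port 3128 and eth1 faces the LAN (neither value comes from this article):

```shell
# Redirect all outbound HTTP (port 80) traffic arriving from the LAN
# to the local Squid port, so clients need no browser configuration.
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128
```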

Reverse Proxy

A. Reverse Proxy Buffer Server

A reverse proxy is completely different from the previous two proxy types: it reduces the load on the origin web server. The reverse proxy server takes over requests for the origin server's static pages to prevent the origin from being overloaded. It sits between the local web server and the Internet, handles all requests destined for the web server, and shields the web server from direct communication with the Internet. If the page an Internet user requests is cached on the proxy server, the proxy sends the cached content to the user directly. If it is not cached, the proxy first requests the data from the web server, stores it in the local cache, and then sends it to the user. This reduces both the number of requests reaching the web server and its load.

II. System Architecture

1. Principles

DNS round robin is used to distribute client requests to one of the Squid reverse proxy servers. If that Squid has the requested resource in its cache, it returns the resource to the user directly. Otherwise, following the configured rules, Squid forwards the request to its neighbor Squid and to the backend web servers. This reduces the load on the backend web servers and improves the performance and security of the whole site.
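The distribution step can be pictured with a tiny shell sketch. This is an illustration only; the real round robin happens inside the DNS server, which rotates the order of the A records it answers with:

```shell
# Toy model of DNS round robin: successive lookups rotate through the
# A records, so requests alternate between the two Squid front ends.
pick_server() {   # pick_server N -> the address the N-th request resolves to
  case $(( $1 % 2 )) in
    0) echo 172.16.54.150 ;;
    1) echo 172.16.54.151 ;;
  esac
}
pick_server 0   # 172.16.54.150
pick_server 1   # 172.16.54.151
pick_server 2   # wraps back to 172.16.54.150
```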

2. Host allocation:

DNS server: Enable two NICs and connect two network segments

Eth0: 10.10.54.150

Eth1: 172.16.54.254 (as the gateway of 172.16.54.0/24)

Two squid reverse proxy servers

Squid1: 172.16.54.150

Squid2: 172.16.54.151

Two web servers (with Discuz_X3.0_SC_UTF8.zip installed)

Web1: 172.16.54.200

Web2: 172.16.54.201

Three mysql servers (one master and two slave servers)

Master: 172.16.54.203

Slave1: 172.16.54.204

Slave2: 172.16.54.205

III. Memory Optimization

Edit /etc/sysctl.conf and add the following content:

Shell> vi /etc/sysctl.conf

net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 1024 65000

Explanation of the configuration options:

net.ipv4.tcp_rmem = 4096 87380 4194304: TCP read buffer; 32768 436600 873200 is a recommended optimized value.

net.ipv4.tcp_wmem = 4096 65536 4194304: TCP write buffer; 8192 436600 873200 can be used as a reference optimized value.

net.core.wmem_default: default size (in bytes) of the send socket buffer.

net.core.rmem_default: default size (in bytes) of the receive socket buffer.

net.core.rmem_max: maximum size (in bytes) of the receive socket buffer.

net.core.wmem_max: maximum size (in bytes) of the send socket buffer.

net.core.netdev_max_backlog = 262144: maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.core.somaxconn = 262144: the backlog passed to listen() in a web application is capped by somaxconn, whose default is 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.

net.ipv4.tcp_max_orphans = 3276800: maximum number of TCP sockets in the system not attached to any user file handle.

net.ipv4.tcp_max_syn_backlog = 8192: length of the SYN queue; the default is 1024, and 8192 accommodates more connections waiting to complete the handshake.

net.ipv4.tcp_max_tw_buckets = 5000: maximum number of TIME_WAIT sockets the system keeps at the same time; beyond this number, TIME_WAIT sockets are cleared immediately and a warning is printed. Lowering it keeps the Squid servers from being dragged down by large numbers of TIME_WAIT sockets.

net.ipv4.tcp_timestamps = 0: timestamps guard against sequence-number wraparound; a 1 Gbit/s link will certainly encounter a previously used sequence number, and timestamps let the kernel accept such "abnormal" packets. They are switched off here.

net.ipv4.tcp_tw_recycle = 1: enable fast recycling of TIME-WAIT sockets for TCP connections.

net.ipv4.tcp_tw_reuse = 1: allow TIME-WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_mem = 786432 1048576 1572864: three thresholds. Below tcp_mem[0], TCP is under no memory pressure; above tcp_mem[1], it enters the memory-pressure stage; above tcp_mem[2], TCP refuses to allocate sockets. The values can be raised according to physical memory size; 94500000 915000000 927000000 is recommended for large-memory hosts.

net.ipv4.tcp_fin_timeout = 30: how long a socket closed by the local end stays in the FIN-WAIT-2 state.

net.ipv4.tcp_keepalive_time = 1200: how often TCP sends keepalive messages when keepalive is enabled; the default of 2 hours is changed to 20 minutes.

net.ipv4.ip_local_port_range = 1024 65000: port range used for outbound connections; the small default of 32768 to 61000 is widened to 1024 to 65000.

Make the configuration take effect immediately:

Shell> /sbin/sysctl -p

IV. Configure the DNS server for round robin

# Install the DNS server and configure named.conf
Shell> yum install bind bind-utils

Shell> vim /etc/named.conf

options {
        listen-on port 53 { any; };
        allow-query { any; };
        recursion yes;
        # rrset-order: answer with the A records of bbs.centos.com
        # in cyclic (round robin) order
        rrset-order { class IN type A name "bbs.centos.com" order cyclic; };
};
zone "." IN { type hint; file "named.ca"; };
zone "centos.com" IN { type master; file "named.centos.com"; };
zone "54.16.172.in-addr.arpa" IN { type master; file "named.172.16.54"; };

# Configure the forward zone
Shell> vim /var/named/named.centos.com

$TTL 600
@                IN SOA centos.com. ftp. (
                        2011080404 3H 15M 1W 1D )
@                IN NS  centos.com.
centos.com.      IN A   10.10.54.150
@                IN MX 10 mail.centos.com.
mail.centos.com. IN A   10.10.54.151
bbs.centos.com.  IN A   172.16.54.150
bbs.centos.com.  IN A   172.16.54.151

# Restart named
Shell> /etc/init.d/named restart

# Test whether DNS round robin takes effect by pinging twice
Shell> ping bbs.centos.com
PING bbs.centos.com (172.16.54.151) 56(84) bytes of data.
64 bytes from 172.16.54.151: icmp_seq=1 ttl=64 time=0.441 ms

Shell> ping bbs.centos.com
PING bbs.centos.com (172.16.54.150) 56(84) bytes of data.
64 bytes from 172.16.54.150: icmp_seq=1 ttl=64 time=0.019 ms

# The two pings resolve to different IPs, so round robin works.
# Note: the test machine's DNS server must be set to 10.10.54.150.

V. Configure the two squid servers

1. Compile and install squid

Shell> yum install gcc gcc-c++ make wget perl

Shell> cd /tmp

Shell> wget http://www.squid-cache.org/Versions/v3/3.1/squid-3.1.19.tar.gz

Shell> tar xzf squid-3.1.19.tar.gz

Shell> cd squid-3.1.19

Shell> ./configure --prefix=/usr/local/squid --enable-gnuregex --disable-carp --enable-async-io=240 --with-pthreads --enable-storeio=ufs,aufs,diskd --disable-wccp --enable-icmp --enable-kill-parent-hack --enable-cachemgr-hostname=localhost --enable-default-err-language=Simplify_Chinese --with-maxfd=65535 --enable-poll --enable-linux-netfilter --enable-large-cache-files --disable-ident-lookups --enable-default-hostsfile=/etc/hosts --with-dl --with-large-files --enable-delay-pools --enable-snmp --disable-internal-dns --enable-underscore --enable-arp-acl

Shell> make && make install

2. Create the squid cache directory and log directory

Shell> groupadd squid

Shell> useradd -g squid -s /sbin/nologin squid

Shell> mkdir -p /squid/data

Shell> mkdir /squid/log

Shell> chown -R squid.squid /squid

3. Edit the squid configuration file, setting it up in reverse proxy mode to load-balance the two web servers

Shell> vim /usr/local/squid/etc/squid.conf

# Effective user and group
cache_effective_user squid
cache_effective_group squid
# Visible host name
visible_hostname squid1.lij.com
# Run squid in reverse proxy (accelerator) mode
http_port 172.16.54.150:80 accel vhost vport
# Configure squid2 as a sibling neighbor: when squid1 does not find the
# requested resource in its own cache, it first queries the neighbor's
# cache via ICP
icp_port 3130
cache_peer 172.16.54.151 sibling 80 3130
# Configure the two web servers as squid1's parents; "originserver" marks
# them as origin servers and "round-robin" makes squid distribute requests
# between them by polling. Squid also tracks the health of these parents:
# if one goes down, squid fetches everything from the remaining origin server.
cache_peer 172.16.54.200 parent 80 0 originserver round-robin
cache_peer 172.16.54.201 parent 80 0 originserver round-robin
# Access control, log, and cache directory settings
cache_mem 128 MB
maximum_object_size_in_memory 4096 KB
maximum_object_size 10240 KB
cache_dir aufs /squid/data 4000 16 512
coredump_dir /squid/data
# Log paths
cache_access_log /squid/log/access.log
cache_log /squid/log/cache.log
cache_store_log /squid/log/store.log
acl localnet src 10.10.54.0/24
http_access allow all
icp_access allow localnet
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
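The article does not show the start-up step. Assuming the installation prefix and cache directory used above, it would typically look like this (a sketch, not from the original text):

```shell
/usr/local/squid/sbin/squid -k parse   # syntax-check squid.conf first
/usr/local/squid/sbin/squid -z         # build the cache_dir structure under /squid/data
/usr/local/squid/sbin/squid            # start the daemon
```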

4. The configuration on squid2 is exactly the same as on squid1; only the relevant IP addresses need to be changed, for example:

visible_hostname squid2.lij.com
http_port 172.16.54.151:80 accel vhost vport
icp_port 3130
cache_peer 172.16.54.150 sibling 80 3130

Note: the following hosts records must be added on both squid servers.

Shell> vim /etc/hosts

172.16.54.150 squid1.lij.com
172.16.54.151 squid2.lij.com

VI. Install three MySQL servers and configure the MHA high-availability architecture

1. MHA introduction: MHA is a MySQL failover solution written in Perl by a Japanese MySQL expert to ensure high database availability. It performs master failover within 0 to 30 seconds: once the master goes down, a backup server is promoted to master and continues to provide service, so our web servers can keep operating.

2. For the MHA environment setup process, refer to my other article: http://3974020.blog.51cto.com/3964020/1394246 (that article uses four servers: one manager, one master, and two slaves, one of which is a standby master). Three hosts can also implement the MHA environment; you only need to make one slave host double as the manager.

3. The final MHA Structure

Master: 172.16.54.203

Slave1: 172.16.54.204 (standby master: when the master goes down, slave1 takes over as master and continues to provide service)

Slave2: 172.16.54.205 (two roles: as a slave it synchronizes data from the master, and as the manager node it monitors whether the master is healthy)
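The failover behaviour described above can be reduced to a tiny decision sketch (a toy model only; MHA itself does far more, such as applying relay logs before promotion):

```shell
# Toy model of the failover decision: if the master stops answering the
# manager's health check, the standby (slave1) is promoted to active master.
master=172.16.54.203
standby=172.16.54.204
master_alive=false           # pretend the manager's health check just failed
if [ "$master_alive" = true ]; then
  active=$master
else
  active=$standby            # failover: slave1 takes over as master
fi
echo "active database: $active"
```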

VII. Configure the web servers and install the Discuz forum

1. We choose nginx for the web servers, so the LNMP environment must be installed first; that is skipped here. Only the nginx configuration is given below, without the nginx performance-tuning parameters.

2. Shell> vim /usr/local/nginx/conf/nginx.conf

# The worker user must match the owner of the web root directory (apache)
user apache;
worker_processes 2;
error_log logs/error.log;
pid logs/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log;
    sendfile off;
    keepalive_timeout 65;
    #gzip on;
    server {
        listen 80;
        server_name bbs.centos.com;
        # Web root directory
        root /var/www/bbs/upload;
        charset utf-8;
        index index.php index.html;
        access_log logs/bbs.access.log;
        # Hand PHP requests to php-fpm
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
        # nginx status page, mainly used for monitoring
        location /server-status {
            stub_status on;
            allow all;
            access_log off;
        }
    }
}

3. The configured web root directory is /var/www/bbs/upload. First switch to its parent directory /var/www/bbs:

Shell> cd /var/www/bbs

Shell> unzip Discuz_X3.0_SC_UTF8.zip

Shell> chown -R apache.apache /var/www/bbs

4. Visit http://172.16.54.200 in a browser to reach the Discuz installer. Check the reported errors first and fix them as prompted; these are generally directory-permission issues.

5. The next step is to specify where Discuz stores its database and the account used to access it. Since the MHA environment is already in place, point the database at the master host (172.16.54.203); once the master goes down, slave1 (172.16.54.204) continues to provide service.
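The article does not show the account creation. A hedged sketch only; the database name, user name, and password below are placeholders, not from the original:

```shell
# Create the Discuz database and account on the master; the statements
# replicate to slave1 and slave2 automatically.
mysql -h 172.16.54.203 -uroot -p <<'SQL'
CREATE DATABASE discuz DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON discuz.* TO 'discuz'@'172.16.54.%' IDENTIFIED BY 'change_me';
SQL
```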

VIII. Test

1. On the test machine, set the DNS server address to 10.10.54.150, then open bbs.centos.com in a browser to verify the site is reachable. Then perform the following tests:

2. Test the web servers: the squid servers monitor the backend web servers, so if one fails, squid proxies user requests to the other. Take one web server down during the test to verify this.

3. Test the MySQL servers: the MHA architecture provides failover. Shut down the master during the test to check whether service switches to slave1 successfully and whether the forum remains accessible.


This article is from the "figinging-cluter" blog; please be sure to keep this source: http://3974020.blog.51cto.com/3964020/1393353
