Web Server Cluster Construction


I. Requirement Analysis:

1. Overall requirement: build a highly available website server cluster that can withstand high-concurrency requests and defend against common network attacks. The failure of any single server must not affect the operation of the cluster as a whole, and the running status of each server must be monitored in real time.

2. Detailed requirement analysis:

Build the runtime environment based on the following topology as needed:

 
II. Detailed Function Description:

1. The front-end servers use nginx for reverse proxying and load balancing, and keepalived for HA. This part is implemented by centos1 and centos4, with centos1 as the master server and centos4 as the hot-backup server. Nginx distributes requests to the backend servers according to the client IP address, using the ip_hash method to keep each client's session on the same backend.

2. The backend servers are built on centos2 and centos3. Apache is used as the web publishing software, mysql as the database, and Django for the test webpage. The databases on the two servers synchronize automatically.

3. As the hot-standby server, centos4 carries no service traffic and is relatively idle while centos1 is healthy. Therefore, nfs is configured on centos4 to make it a file-sharing server, and the website files are placed on it.

4. centos5 is used as the monitoring server and runs nagios to monitor the status of each server; when an alarm occurs, the administrator is notified. In addition, centos5 also serves as the saltstack master server. Software installation, file transfer, command execution, and other operations on the other hosts are all performed in batches through saltstack.
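For illustration, batch operations of this kind look roughly as follows on the salt master (package names come from this article; the minion target patterns and file paths are assumptions):

```
# Accept the minions' keys once they have connected to the master
salt-key -A

# Install a package in batches on the two web servers (target pattern assumed)
salt 'centos[23]' pkg.install httpd

# Run a command on every minion
salt '*' cmd.run 'uptime'

# Push a file from the master's file server out to all minions (paths hypothetical)
salt '*' cp.get_file salt://files/my.cnf /etc/my.cnf
```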
 
III. Overall Deployment Description:

1. The 64-bit version of centos 6.4 is installed on each server; cobbler is used for automated batch installation.

2. Each piece of software is installed at its latest stable version. The software bundled with centos must also be upgraded; for example, the python version that ships with centos is old enough to affect the running of Django.
 
IV. Detailed Deployment Description:

1. nginx settings

Nginx is mainly used for reverse proxying and caching between users and the servers, and it distributes requests to the backend servers as a load balancer.

Install nginx by compiling from source; the specific process is not described here. However, to better resist intrusion, it is recommended to modify the source before compiling so that intruders cannot easily find out the nginx version number. Edit src/core/nginx.h and change the related fields along these lines:

#define NGINX_VERSION "1.0"
#define NGINX_VER "webserver/" NGINX_VERSION
 
The key settings in the nginx configuration file are as follows (only the directives recoverable from the original are shown; the surrounding http/upstream/server structure is inferred):

http {
    default_type                application/octet-stream;
    send_timeout                20;
    client_body_buffer_size     1k;
    client_header_buffer_size   1k;
    large_client_header_buffers 2 1k;
    server_tokens               off;
    client_body_timeout         20;
    client_header_timeout       20;

    upstream backend {
        ip_hash;
        server 10.0.0.2:80;
        server 10.0.0.3;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
            limit_rate 500k;
        }
        error_page 500 502 503 504 /50x.html;
    }
}

Because the backend test webpage is written in Django, nginx does not need to process requests for dynamic pages such as php.
After the configuration is complete, set nginx to start at boot. You can use the following method:

echo "/usr/local/nginx/sbin/nginx" >> /etc/rc.local
 
2. Keepalived settings

Keepalived implements HA between the two servers (centos1 and centos4) through the VRRP protocol. By virtualizing a virtual IP address (192.168.48.138 in this example), the business can be published externally. If either of the two servers goes out of service, keepalived automatically transfers the service to the other server. Here, centos1 is the master server and centos4 is the backup server. Normally, traffic passes only through centos1; only when centos1 is out of service is traffic cut over to centos4.

Keepalive configuration on centos1:

! Configuration File for keepalived

global_defs {
    notification_email {
        admin@test.com
    }
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0          ! interface name assumed
    mcast_src_ip 192.168.48.139
    virtual_router_id 42
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.48.138
    }
}

 
For the backup server centos4, only part of the above configuration needs to change: set state to BACKUP, mcast_src_ip to 192.168.48.140, and priority to 90.
 
Similarly, after the configuration file is in place, add keepalived to the startup items.

Keepalived uses the VRRP protocol to check whether the peer is online, so as long as the peer can be pinged, keepalived considers it online. However, there is also the situation where the peer server is not down but nginx has stopped because of an error. In that case keepalived still judges the peer to be online and does not cut the service over to the backup server. Therefore, we need to run a script on the server that monitors the nginx process: if nginx has stopped, first try to restart it; if it cannot be restarted, stop the keepalived process so the service is cut over to the backup. The script is as follows:

#!/bin/bash
# NginxStatus.sh: restart nginx if it dies; if it cannot be restarted,
# stop keepalived so the service fails over to the backup server.
while true
do
    nginxStatus=`ps -C nginx --no-header | wc -l`
    if [ $nginxStatus -eq 0 ]; then
        /usr/local/nginx/sbin/nginx
        sleep 3
        nginxStatus=`ps -C nginx --no-header | wc -l`
        if [ $nginxStatus -eq 0 ]; then
            /etc/init.d/keepalived stop
        fi
    fi
    sleep 5
done

Place the script on centos1 and set it to run in the background at startup:

echo "nohup ~/NginxStatus.sh &" >> /etc/rc.local
 
Because the while loop in the script keeps checking the process status, there is no need to add it to the scheduled tasks. However, to guard against the script stopping unexpectedly, you can also schedule it to be relaunched every 30 minutes.
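A crontab entry along these lines would do that (the script path is assumed, and the script itself should then guard against a second instance starting while one is already running):

```
# /etc/crontab -- relaunch the watchdog every 30 minutes (path assumed)
*/30 * * * * root /root/NginxStatus.sh >/dev/null 2>&1 &
```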
 
3. backend Web Server Settings

The backend web servers run on centos2 and centos3 and are built with apache. The test webpage is written with Django, and the webpage files are stored on the shared file server centos4, which each web server mounts at its local /var/www/html directory.

Use saltstack to batch-install apache, apache-devel, wsgi, Django, and mysql on the two servers and to upgrade python. The process is omitted.
After the environment is deployed, modify the apache configuration file (httpd.conf), change the running user and group to apache, and point the root directory to /var/www/html.
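The relevant httpd.conf lines would look roughly like this (a sketch; the DocumentRoot value comes from this article, the rest is standard apache directive syntax):

```
User apache
Group apache
DocumentRoot "/var/www/html"
```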

Next, configure mysql so that the two servers are master and slave of each other, so that a change to the database on one server is immediately synchronized to the other, ensuring data consistency between the two servers. The procedure is as follows:

First, set centos2

Open the mysql configuration file my.cnf on centos2 and make the following changes:

log-bin = mysql-bin
server-id = 1
binlog-ignore-db = information_schema

 
Log on to mysql as the root user and run the following command:
grant replication slave on *.* to 'test'@'10.0.0.3' identified by 'test123';
 
This command creates a user named test, usable from the 10.0.0.3 machine with the password test123, which has permission to replicate all tables of all databases on centos2 to centos3 (10.0.0.3).

Then execute:
Show master status;

You can see the following information:

 
Record this information, log on to mysql on centos3 as root, and execute the following commands (note: if the database to be synchronized is not empty, first add a read lock to it and import the master server's database into the slave before synchronizing):

change master to master_host='10.0.0.2', master_user='test', master_password='test123', master_log_file='mysql-bin.000070', master_log_pos=106;
start slave;
 
The master_log_file and master_log_pos values above are taken from the output of the show master status command on centos2. Run the show slave status command to check whether centos3 has successfully become a slave server:

 
We can see that mysql on centos3 has become the slave of centos2 (10.0.0.2). From this point, mysql on centos3 actively synchronizes data from centos2. We only need to perform the same operations once more in the other direction (first modify my.cnf on centos3, where server-id must be set to 2, then create the user and execute the authorization command there), and the two databases become master and slave of each other, synchronizing automatically.
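For reference, the corresponding my.cnf fragment on centos3 mirrors the one above, differing only in the server-id:

```
log-bin = mysql-bin
server-id = 2
binlog-ignore-db = information_schema
```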
 
4. nfs

In this step, we configure centos4 as the file-sharing server. The webpage content of the two backend web servers is placed on the shared file server, which ensures that the webpage files on the two web servers stay consistent.
 
nfs is installed by default on centos 6. First, create an exports file in the /etc directory of centos4 and add the following content:
/home/apache/html 10.0.0.2(rw,sync)
/home/apache/html 10.0.0.3(rw,sync)
 
Then, execute the following command step by step on centos4:

groupadd apache
useradd -g apache -s /sbin/nologin -d /home/apache apache
mkdir -p /home/apache/html
chown -R apache:apache /home/apache/
chmod -R 700 /home/apache/
chkconfig nfs on --level 35
service nfs start
 
The permission settings for the shared directory are based on the needs of the two backend web servers.

Apache runs as the apache user and group, and file reads and writes are performed by this user. To keep the data safe, the shared directory grants permissions only to this user and to no one else. Accordingly, the owner of the shared folder must be set to apache:apache.

Next, on the centos2 and centos3 web servers, make the following setting so that they automatically mount the shared directory at startup (replace <centos4-ip> with centos4's internal address):

echo "mount -t nfs -o hard,bg,nfsvers=3 <centos4-ip>:/home/apache/html /var/www/html" >> /etc/rc.local
 
In the above options, hard means the client keeps retrying when the server is briefly unreachable, without returning an error;

bg means that if the mount fails, the mount operation moves to the background and keeps retrying until it succeeds;

nfsvers=3 means NFS protocol version 3 is used.
 
5. nagios settings

The monitoring software is installed on centos5 and monitors the other four servers. When an exception occurs, an email is sent to the Administrator for an alert.
 
Installation process: install nagios, the nagios plug-in package, nrpe, apache (used to serve the monitoring webpages), and pnp (used to generate monitoring data charts) on centos5. The nagios plug-in package and nrpe must also be installed on the four monitored hosts. The detailed installation process is omitted here.
The following items need to be monitored for all four servers:
1. Check Swap: monitor the remaining space of the swap partition
2. Check Zombie Procs: monitor the number of zombie processes
3. Total Processes: monitor the total number of processes
4. check_no_allowed_user: monitor for unauthorized user logins
5. check-system-load: monitor the system load
 
Nagios works as follows: the services.cfg file under the nagios directory on the server defines which monitoring items of each client are to be checked; the corresponding monitoring script is executed on the client, and the result is fed back to the nagios server through the nrpe daemon on the client.
By default, the value returned by the monitoring script indicates the following:
OK - exit code 0 - the service is working normally.
WARNING - exit code 1 - the service is in a warning state.
CRITICAL - exit code 2 - the service is in a critical state.
UNKNOWN - exit code 3 - the service state is unknown.
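As an illustration of this convention, a minimal Nagios-style check can be sketched as a shell function (the metric and the thresholds here are invented for the example):

```shell
# Print a Nagios-style status line for a numeric value and return the
# matching exit code: 0=OK, 1=WARNING, 2=CRITICAL.
check_threshold() {
    value=$1
    if [ "$value" -ge 90 ]; then
        echo "CRITICAL - value is $value"
        return 2
    elif [ "$value" -ge 70 ]; then
        echo "WARNING - value is $value"
        return 1
    else
        echo "OK - value is $value"
        return 0
    fi
}

check_threshold 10   # prints "OK - value is 10"
```

A real plugin would compute the value from the system (disk usage, process count, etc.) instead of taking it as an argument.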
 
Therefore, to implement the preceding five monitoring items, first modify the services.cfg file on the nagios server (centos5) and add a service definition for each item. Taking check_no_allowed_user on centos1 as an example (the block structure is reconstructed; the other four items differ only in service_description and check_command):

define service {
    use                     services-pnp
    host_name               centos1
    service_description     check_no_allowed_user
    check_command           check_nrpe!check_no_allowed_user
    max_check_attempts      10
    normal_check_interval   5
    notifications_enabled   1
    flap_detection_enabled  0
    notification_options    w,c,r
    notification_interval   5
    notification_period     24x7
}
 
The above is the monitoring service definition for centos1. The definitions for the other three servers are the same except for host_name; they are omitted here for space.

Next, modify the client-side settings. Taking centos1 as an example, edit the nrpe.cfg file in the nagios directory and add the following content (some entries already exist by default; watch for duplicates when editing):

command[check_swap]=/usr/local/nagios/libexec/check_swap -w 20% -c 10%
command[check_load]=/usr/local/nagios/libexec/check_load -w 1.8,1.5,1.2 -c 2.5,2,1.8
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
command[check_no_allowed_user]=/usr/local/nagios/libexec/check_no_allowed_user.py -a cjyfff
 
Of the scripts corresponding to the five commands above, check_no_allowed_user.py is a script I wrote; the other four are built into nagios. check_no_allowed_user.py checks whether any user outside the allowed list has logged on to the system. Usage: pass option -a followed by the comma-separated list of users allowed to log on (format: -a user1,user2,...; root is in the allowed list by default, so it does not need to be added). When a user outside the allowed list logs on, a nagios CRITICAL alarm is triggered. The script content is as follows:
 

#!/usr/bin/env python
# check_no_allowed_user.py (reconstructed from the garbled original):
# exit 2 (CRITICAL) when a user outside the allowed list is logged in.
import os
import sys

# users named after "-a", plus root, are allowed
allowed = ["root"]
if "-a" in sys.argv:
    allowed += sys.argv[sys.argv.index("-a") + 1].split(",")

a = os.popen("who | awk '{print $1}'").read()
b = list(set(a.split()))
for i in b:
    if i not in allowed:
        print "CRITICAL - unauthorized user logged in: %s" % i
        sys.exit(2)
print "OK - only allowed users are logged in"
sys.exit(0)
 
In addition, for the two front-end servers centos1 and centos4, you also need to add two monitoring services:
80 port: monitor port 80
CheckNginxState: monitor whether the nginx process is running
 
Similarly, taking centos1 as an example, modify services.cfg on centos5 and add definitions along these lines (reconstructed; notification settings as in the earlier definition):

define service {
    use                     services-pnp
    host_name               centos1
    service_description     80-port
    check_command           check_tcp!80
}

define service {
    use                     services-pnp
    host_name               centos1
    service_description     CheckNginxState
    check_command           check_nrpe!check_nginx
}

 
Modify nrpe. cfg on centos1 and add the following content:
Command [check_nginx] =/usr/local/nagios/libexec/check_nginx.sh
 
Here, check_nginx.sh is also a script I wrote. Its content is as follows:

#!/bin/bash
# check_nginx.sh (reconstructed): report the nginx process state to nagios
a=`ps -C nginx --no-header | wc -l`
if [ $a -eq 0 ]; then
    echo "CRITICAL - nginx is not running"
    exit 2
else
    echo "OK - nginx is running"
    exit 0
fi

 
For the centos4 shared file server, a monitoring item for disk capacity also needs to be added. The check_disk script that comes with nagios can be used here; the method is the same as above and is not repeated.

For centos2 and centos3 as backend web servers, it is also necessary to monitor whether apache and mysql are running. The monitoring script is very simple: in check_nginx.sh above, just replace "nginx" in the line a=`ps -C nginx --no-header | wc -l` with "httpd" or "mysqld".

Finally, the client monitoring is configured. centos5, as the server, can also monitor itself: the server-side monitoring scripts likewise go under libexec/, and the monitoring services are defined in localhost.cfg.

After adding a self-written script, remember to change its owner to nagios:nagios and grant execute permission.

After the configuration is complete, open the nagios monitoring page as follows:

 
6. Security Settings

To reduce the risk of brute-force password cracking, change the ssh port from the default 22 to another port on each server (2002 in this example), specify the IP addresses allowed to talk to sshd in /etc/hosts.allow, and add sshd: ALL to /etc/hosts.deny.
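A sketch of those settings (the Port line goes in sshd_config; the allowed source addresses are assumptions based on the subnets used in this article):

```
# /etc/ssh/sshd_config
Port 2002

# /etc/hosts.allow  (allowed sources assumed)
sshd: 192.168.48. 10.0.0.

# /etc/hosts.deny
sshd: ALL
```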
Considering that the servers are on an intranet, which already isolates many attacks from the Internet, and that enabling the firewall would slow data forwarding between servers, I disabled iptables on the other four servers and enabled it only on centos5. In a real production environment, the firewall in front of the cluster would further protect it.

The iptables script of centos5 is as follows:

#!/bin/bash
iptables -F
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -s 192.168.48.139 -j ACCEPT
iptables -A INPUT -s 192.168.48.140 -j ACCEPT
iptables -A INPUT -s 10.0.0.2 -j ACCEPT
iptables -A INPUT -s 10.0.0.3 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dport 22,80 -j ACCEPT
/etc/rc.d/init.d/iptables save

V. Conclusion:

Finally, I tested the website with apache benchmark (ab). Since this is a virtual machine environment, the test data are of limited reference value, but the test still shows which servers carry the heaviest load under high concurrency. At 100 concurrent requests the webpage could still be opened, but nagios lost the responses from the two backend web servers and reported request timeouts, while the load on the other two servers remained very low. This indicates that under high concurrency the backend web servers carry the heaviest load and are the main targets for optimization.
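For reference, a test of this kind is invoked roughly as follows against the cluster's virtual IP (the request count is invented for the example):

```
# 10000 requests at a concurrency of 100 against the VIP
ab -n 10000 -c 100 http://192.168.48.138/
```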

One deficiency of this experimental environment is that the shared files are stored on only one server, which easily becomes a single point of failure. If conditions permit, a distributed storage system such as MFS is recommended.

In addition, two points remain to be followed up after building the experimental environment:

1. Optimize apache performance so that the backend server can cope with larger concurrency.

2. mysql master-slave synchronization has a delay problem, which may cause inconsistency between the master and slave databases. Plug-ins that address this can be found online; I will try them next.
