RHEL 5 LVS Configuration

In enterprise IT, the two most common cluster architectures are high-availability clusters and load-balancing clusters. A load-balancing cluster distributes traffic across multiple servers or application instances: to external applications and clients the cluster appears as a single virtual server, while inside the cluster the member servers share the submitted requests evenly. This balances the load between servers and scales flexibly; when business pressure grows, new servers can be added at any time to raise the overall capacity of the cluster. Load-balancing clusters are especially suitable for high-concurrency network applications such as websites, file servers, and other services handling large numbers of concurrent sockets. Most load-balancing solutions are implemented with dedicated hardware load balancers, but these products are expensive: even though Gigabit Ethernet is now common in the server field, a mid-range 10-Gigabit hardware load balancer still costs close to 100,000 yuan, which puts it out of reach for many enterprises.
The LVS (Linux Virtual Server) load-balancing project was founded by Dr. Zhang Wensong of China's National University of Defense Technology; it is one of the few open-source projects maintained by Chinese developers that has been merged into the Linux kernel. The project has developed steadily, with its functionality and performance improving over time, and many commercial hardware load balancers today are in fact built on the Linux kernel and LVS.
There are many LVS configuration tutorials on the Internet. If the LVS load balancer is a single machine, installing the LVS service is enough; with multiple LVS servers, a combination such as LVS + Keepalived or Heartbeat + ldirectord + LVS is normally used to provide hot backup of the load balancer and to monitor the state of the application servers. The LVS component of RHCS in RHEL 5 offers, through its graphical configuration, more functionality than either of those combinations. Below we use RHEL 5 LVS as an example to walk through a basic LVS configuration.
Lab Topology

1.1 LVS Installation
1. Install RHEL 5 LVS:
[root@lvs1 ~]# yum -y install piranha
In general, RHEL 5 LVS pulls in the following 14 software packages.


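A quick way to confirm the installation succeeded is to query the RPM database (ipvsadm is pulled in as a dependency of piranha):
[root@lvs1 ~]# rpm -q piranha ipvsadm
If either package is reported as not installed, re-run the yum command above.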
2. Set a password for the LVS graphical configuration interface:
[root@lvs1 ~]# piranha-passwd
New Password: <enter a custom password, such as RedHat>
Verify: <enter the same password again>
Adding password for user piranha

3. Start the RHEL 5 LVS graphical configuration interface:
[root@lvs1 ~]# service piranha-gui start
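If the configuration GUI should survive a reboot, the piranha-gui service can also be enabled at boot in the usual way:
[root@lvs1 ~]# chkconfig piranha-gui on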
1.2 LVS Configuration
Using a browser, access http://192.168.20.101:3636 on the server where the piranha service is running, click "Login", and log on with the username piranha and the password you just set (such as RedHat).

After logging on, the Control/Monitoring page is displayed with the LVS monitoring data. On this page you can set the refresh interval for the monitoring data and change the management password.

Configure global settings. Click "Global Settings" to go to the primary server configuration page.

Primary server public IP: the IP address the primary server uses to reach the application servers (Real Servers).
Primary server private IP: the internal IP address the primary server uses to reach the backup server.
Use network type: the selected LVS forwarding mode; here we use Direct Routing.

Configure Redundancy
(1) Click "redundancy" and enable the backup server by clicking "enable", as shown in.

(2) Configure the backup server.

Redundant server public IP: the IP address the backup server uses to reach the application servers (Real Servers).
Heartbeat interval: how often, in seconds, the backup server polls the primary server's heartbeat.
Assume dead after: if the primary server's heartbeat is not seen for this many seconds, it is declared dead and the backup takes over.
Heartbeat runs on port: the port used for heartbeat detection.
Monitor NIC links for failures: also check the link state of the network interface.

Configure virtual servers

(1) Click "virtual servers" to configure the server cluster and click "add" to add a virtual server, as shown in.

(2) Select the virtual server you want to edit and click "Edit" to modify its attributes; when finished, click "Accept" to save.

(3) Fill in the following information in the pop-up interface:
Name: name of the virtual server.
Application port: Specifies the port of the target application service.
Protocol: The network protocol of the target application service, TCP or UDP.
Virtual IP Address: defines the virtual IP address used by the target application.
Virtual IP network mask: defines the subnet mask of the virtual IP used by the target application.
Firewall mark: when the target application spans multiple ports, an iptables firewall mark can be used to group them.
Device: the name of the network interface the virtual IP is bound to (for example eth0:1).
Re-entry time: after a Real Server is found to have failed, the interval in seconds at which the LVS router re-checks it before bringing it back into the pool.
Server timeout: after the LVS router sends a request to a Real Server, if no response is received within this many seconds the server is considered dead.
Quiesce server: whenever a Real Server is added or comes back online, all connection counters are reset to zero so the load is redistributed from scratch.
Load monitoring tool: obtain the system load of each Real Server via the ruptime or rup command and feed it into the scheduling calculation.
Scheduling: the scheduling algorithm used by this virtual server.
Persistence: how long, in seconds, connections from the same client stick to the same Real Server.
Persistence network mask: the subnet mask used to group client addresses for persistent connections.


Note: the load monitoring tool requires ruptime or rup to be installed on each Real Server, and requires the LVS server to be able to connect to the Real Servers via SSH as root without a password.

Scheduling offers the following eight policies:
Round-robin scheduling: distributes requests to the Real Servers one by one in turn.
Weighted round-robin scheduling: round robin weighted by each server's weight value.
Least-connections: sends new requests to the Real Server with the shortest active-connection queue.
Weighted least-connections: least-connections weighted by each server's weight value.
Locality-based least-connection scheduling (LBLC): if the server that most recently handled this destination is available and not overloaded (under half capacity), the request goes there; otherwise the least-connections policy is used. Aimed mainly at cache gateway servers.
Locality-based least connections with replication scheduling: like LBLC, but adds a replication policy so that "popular" sites are cached on the same gateway server whenever possible, further avoiding duplicate cache entries across servers. Also aimed at cache gateway servers.
Destination hashing scheduling: picks the target server by hashing the destination address. Aimed at cache gateway servers.
Source hashing scheduling: picks the target server by hashing the source address. Aimed at cache gateway servers.
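For orientation, Piranha ultimately programs the kernel IPVS table through ipvsadm. A rough hand-built equivalent of the virtual server configured in this lab might look like the sketch below (addresses from the lab topology, the wlc scheduler, and -g for direct routing); this only illustrates what the GUI manages for you, not a literal dump of what pulse executes:
[root@lvs1 ~]# ipvsadm -A -t 192.168.20.100:80 -s wlc
[root@lvs1 ~]# ipvsadm -a -t 192.168.20.100:80 -r 192.168.20.151:80 -g -w 1
[root@lvs1 ~]# ipvsadm -a -t 192.168.20.100:80 -r 192.168.20.152:80 -g -w 1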

(4) Click "Real Server" and click "add" to add two real servers, as shown in.

(5) Edit the first Real Server.

(6) Edit the second Real Server in the same way.

(7) Activate both Real Servers.



(8) Click "monitoring scripts" and configure LVS to check the rules applied to the target in the RealServer, as shown in.

Sending program: determine the availability of the application on each Real Server by running an external program (cannot be used together with Send).
Send: send this string directly to the port specified in the virtual server.
Expect: the string expected back after Sending program or Send runs; if the response matches, the application on that Real Server is considered healthy.
Treat expect string as a regular expression: match the response against the Expect value as a regular expression.
Note: this function is similar to the "cluster script" in RHCS. It is mainly used to determine whether the target service on a Real Server is running normally; if the service is found to have failed, that Real Server is automatically isolated from the virtual server.
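As an illustration of a sending program, a minimal external check script might look like the sketch below; the path and file name are hypothetical, and Piranha substitutes the Real Server address for %h when it appears on the program line:
#!/bin/sh
# /usr/local/bin/check_http.sh (hypothetical) -- configured as:
#   Sending program: /usr/local/bin/check_http.sh %h
# Fetch the home page of the Real Server given as $1 and print OK on
# success, so the Expect field can simply be set to OK.
if curl -s --max-time 5 "http://$1/" > /dev/null 2>&1; then
    echo "OK"
else
    echo "FAIL"
fi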
(9) Do not forget to activate the virtual server after the configuration is complete.


6. Check the configuration. The LVS service configuration is now complete; you can view the generated configuration file /etc/sysconfig/ha/lvs.cf:
serial_no = 15
primary = 192.168.20.101
primary_private = 192.168.30.101
service = lvs
backup_active = 1
backup = 192.168.20.102
backup_private = 192.168.30.102
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
debug_level = NONE
monitor_links = 0
syncdaemon = 0
virtual 30wish_web {
     active = 1
     address = 192.168.20.100 eth0:1
     vip_nmask = 255.255.255.0
     port = 80
     send = "GET / HTTP/1.0\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = none
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server 30wish_web1 {
         address = 192.168.20.151
         active = 1
         weight = 1
     }
     server 30wish_web2 {
         address = 192.168.20.152
         active = 1
         weight = 1
     }
}
7. Synchronize the configuration file to lvs2.
Copy /etc/sysconfig/ha/lvs.cf to lvs2:
[root@lvs1 ~]# scp /etc/sysconfig/ha/lvs.cf lvs2:/etc/sysconfig/ha/lvs.cf
8. Start and stop the RHEL 5 LVS cluster. The service that corresponds to RHEL 5 LVS is named pulse; we use it to start and stop the LVS service, and set it to start automatically at boot:
[root@lvs1 ~]# service pulse start
Starting pulse: [OK]
[root@lvs1 ~]# chkconfig --level 2345 pulse on
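Once pulse is running, a quick sanity check is to confirm the daemon is up and that the kernel IPVS table has been populated from lvs.cf:
[root@lvs1 ~]# service pulse status
[root@lvs1 ~]# ipvsadm -L -n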

1.3 LVS client Configuration
Install the httpd service on each of the two Real Servers:
[root@30wish_web1 ~]# yum install httpd
[root@30wish_web2 ~]# yum install httpd

Create a test web page on each:
[root@30wish_web1 ~]# echo "this is the 30wish_web1 111" > /var/www/html/index.html
[root@30wish_web2 ~]# echo "this is the 30wish_web2 222" > /var/www/html/index.html

Delete the Apache welcome page and start the httpd service:
[root@30wish_web1 ~]# rm -rf /etc/httpd/conf.d/welcome.conf
[root@30wish_web2 ~]# rm -rf /etc/httpd/conf.d/welcome.conf
[root@30wish_web1 ~]# service httpd start
[root@30wish_web2 ~]# service httpd start

Modify the firewall policies so each Real Server accepts traffic addressed to the virtual IP (required for direct routing, since packets arrive with the VIP as their destination):
[root@30wish_web1 ~]# iptables -t nat -A PREROUTING -p tcp -d 192.168.20.100 --dport 80 -j REDIRECT
[root@30wish_web2 ~]# iptables -t nat -A PREROUTING -p tcp -d 192.168.20.100 --dport 80 -j REDIRECT
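A commonly used alternative for direct routing, in place of the REDIRECT rule above, is to bind the VIP to the loopback interface on each Real Server and suppress ARP replies for it; a sketch for the first web server (repeat on the second; these settings do not persist across reboots unless added to the startup scripts):
[root@30wish_web1 ~]# ifconfig lo:0 192.168.20.100 netmask 255.255.255.255 up
[root@30wish_web1 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@30wish_web1 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
Either way, the Real Servers accept packets addressed to the VIP without answering ARP for it, which is what direct routing requires.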

1.4 RHEL 5 LVS cluster performance test
1. Restart the RHEL 5 LVS service:
[root@lvs1 ~]# service pulse restart
Shutting down pulse: [OK]
Starting pulse: [OK]
[root@lvs2 ~]# service pulse restart
Shutting down pulse: [OK]
Starting pulse: [OK]

2. Check the LVS status on the master node. The output shows that the LVS cluster contains two web servers; in this test the first server held 19 sessions and the second 20, so the load is spread almost evenly. A sketch of such output follows.
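For reference, the ipvsadm output in this state looks roughly like the following (the connection counts are illustrative):
[root@lvs1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.20.100:80 wlc
  -> 192.168.20.151:80            Route   1      19         0
  -> 192.168.20.152:80            Route   1      20         0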
3. Test in a browser. Open http://192.168.20.100/ to see the test page of one web server, then click Refresh and check whether the response switches to the second web server.
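The same check can be scripted from any client machine on the 192.168.20.0/24 network (the client prompt below is hypothetical); with both Real Servers healthy, you should see responses along these lines, roughly alternating between the two test pages:
[root@client ~]# for i in 1 2 3 4; do curl -s http://192.168.20.100/; done
this is the 30wish_web1 111
this is the 30wish_web2 222
this is the 30wish_web1 111
this is the 30wish_web2 222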
4. Test failure of the first server. Stop the httpd service on the first web server to simulate a failure:
[root@30wish_web1 ~]# service httpd stop
Check the status on the LVS master server with ipvsadm: the first web server has been removed from the cluster.
5. Simulate recovery of the first server and failure of the second. Start the httpd service on the first web server and stop the httpd service on the second to simulate the second server failing:
[root@30wish_web1 ~]# service httpd start
[root@30wish_web2 ~]# service httpd stop
On the LVS master server, use ipvsadm to check the LVS status again: the recovered first web server has been added back to the cluster, and the failed second web server has been removed.
6. Simulate failure of the LVS primary server. Stop the LVS service on the primary server to simulate its failure:
[root@lvs1 ~]# service pulse stop
After a few seconds, go to the LVS backup server and check the LVS status with ipvsadm: the LVS cluster has failed over to the backup server.
