LVS Piranha + nginx + Tomcat (DR) Configuration


Lab platform: CentOS release 5.2 (Final)

Objective: To quickly grasp and understand the Piranha solution.

Structure:

LVS-ACTIVE: 10.10.42.201

LVS-BACKUP: 10.10.42.202

LVS-VIP: 10.10.42.22

Real servers: 10.10.42.203, 10.10.42.205

 

I. Brief introduction to the Piranha solution.

1. Advantages of the Piranha solution:

1.1.1 Simple and efficient configuration: everything is driven by a single lvs.cf configuration file (a keepalived-like approach).

1.1.2 Web configuration interface: the web UI is very attractive for administrators who are not familiar with LVS configuration syntax.

1.1.3 Complete functionality:

Heartbeat and HA between the master/slave LVS load balancers (pulse, send_arp)

Service heartbeat between the load balancer and each Real Server (nanny)

IPVS management (the lvs daemon and ipvsadm)

 

2. Principles and structure of the Piranha solution:

The Piranha solution is a load-balancing, high-availability suite built on LVS.

LVS runs on two machines with similar configurations:

an active LVS Router, and

a backup LVS Router.

 

The active LVS Router has two roles:

* Distribute the load across the Real Servers.

* Check whether the services provided by each Real Server are healthy.

The backup LVS Router monitors the active LVS Router and takes over when it fails.

 

Pulse: the pulse process runs on both the active and the backup LVS Router.

On the backup LVS Router, pulse sends a heartbeat to the public interface of the active LVS Router to check that it is still alive.

On the active LVS Router, pulse starts the lvs daemon and answers the heartbeats from the backup LVS Router.

 

lvs: the lvs daemon calls the ipvsadm tool to build and maintain the IPVS routing table, and starts one nanny process for each Real Server in each virtual service.
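For reference, the IPVS table that lvs builds for the MpmWeb virtual service defined later in lvs.cf could be created by hand roughly like this (a sketch of the equivalent ipvsadm calls, not what piranha literally executes):

# Add the virtual service on the VIP with round-robin scheduling.
ipvsadm -A -t 10.10.42.22:80 -s rr
# Add both Real Servers in direct-routing (gatewaying) mode with weight 1.
ipvsadm -a -t 10.10.42.22:80 -r 10.10.42.203 -g -w 1
ipvsadm -a -t 10.10.42.22:80 -r 10.10.42.205 -g -w 1
# Inspect the resulting table.
ipvsadm -L -n

Note that in DR mode packets keep the VIP port (80), which is why section V puts nginx on port 80 of each Real Server in front of Tomcat on 8080.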

 

nanny: each nanny process checks the state of its virtual service on its Real Server and reports failures to the lvs daemon; on a failure, the lvs daemon calls ipvsadm to remove that node from the IPVS routing table.

 

send_arp: if the backup LVS Router gets no response from the active LVS Router, it calls send_arp to reassign the virtual IP addresses to the backup LVS Router's public network interface, sends a command over both the public and private network interfaces to shut down the lvs daemon on the active LVS Router, and then starts its own lvs daemon to schedule client requests.
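The effect of send_arp is to broadcast gratuitous ARP replies so switches and neighbors re-learn which MAC address now owns the VIP. As a hedged stand-in you can run by hand (using the common arping tool rather than piranha's bundled send_arp):

# Announce that this host's eth0 now owns the VIP 10.10.42.22.
arping -U -I eth0 -c 3 10.10.42.22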

 

3. Install the base packages for the Piranha solution:

A. wget http://mirrors.163.com/.help/CentOS6-Base-163.repo -O /etc/yum.repos.d/CentOS-Base.repo

B. yum makecache

C. yum -y update

D. yum install unzip ipvsadm modcluster piranha system-config-cluster php-cli php-common

 

4. Configuration file overview:

/etc/sysconfig/ha/lvs.cf // this file is generated from the settings made in the web interface at http://<ip>:3636.

[root@slave1 ~]# cat /etc/sysconfig/ha/lvs.cf

serial_no = 83
primary = 10.10.42.201
service = lvs
backup_active = 1
backup = 10.10.42.202
heartbeat = 1
heartbeat_port = 539
keepalive = 8
deadtime = 9
network = direct
debug_level = NONE
monitor_links = 0
syncdaemon = 0
tcp_timeout = 5
tcpfin_timeout = 6
udp_timeout = 7
virtual MpmWeb {
     active = 1
     address = 10.10.42.22 eth0:1
     vip_nmask = 255.255.255.0
     port = 80
     persistent = 3600
     pmask = 255.255.255.0
     use_regex = 0
     load_monitor = none
     scheduler = rr
     protocol = tcp
     timeout = 30
     reentry = 30
     quiesce_server = 0
     server MpmWeb_s01 {
         address = 10.10.42.203
         active = 1
         port = 8080
         weight = 1
     }
     server MpmWeb_s02 {
         address = 10.10.42.205
         active = 1
         port = 8080
         weight = 1
     }
}

/etc/init.d/piranha-gui start // starts the web configuration interface of the piranha service.

/etc/init.d/pulse start // starts the pulse service, which reads /etc/sysconfig/ha/lvs.cf.
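Note that pulse on the backup LVS Router reads the same file, so /etc/sysconfig/ha/lvs.cf must be identical on both routers. Assuming root SSH access between them (an assumption, not part of the original setup), one way to sync it:

# Push the config from the active router to the backup, then restart pulse there.
scp /etc/sysconfig/ha/lvs.cf root@10.10.42.202:/etc/sysconfig/ha/lvs.cf
ssh root@10.10.42.202 '/etc/init.d/pulse restart'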

 

II. Piranha Configuration

Configure the master LVS server.

# vi /etc/sysctl.conf // enable IP forwarding.

Find the line net.ipv4.ip_forward = 0 and change the 0 to 1: net.ipv4.ip_forward = 1

Then apply the change: sysctl -p
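Equivalently, as a non-interactive sketch of the same change:

# Flip the forwarding flag in place, then reload the sysctl settings.
sed -i 's/net.ipv4.ip_forward = 0/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
sysctl -p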

 

Configure the Piranha service through the web interface.

# /etc/init.d/piranha-gui start // start the Piranha web GUI.

# /usr/sbin/piranha-passwd // set the login password for the Piranha web interface.

Browse to http://10.10.42.201:3636/ and log in with the username piranha and the password you just set.

A) CONTROL/MONITORING

 

B) GLOBAL SETTINGS

Primary server public IP: the IP address the master server uses to reach the application servers (Real Servers).

Primary server private IP: the heartbeat IP address the master server uses to reach the backup server.

Use network type: the LVS forwarding mode (Direct Routing in this setup).

 

C) REDUNDANCY

Redundant server public IP: the public IP address of the backup server, used to reach the application servers (Real Servers).

Redundant server private IP: the heartbeat IP address the backup server uses to reach the master server.

Heartbeat interval: how often the backup polls the master server for a heartbeat.

Assume dead after: if the master server has not answered heartbeats within this time, it is declared dead and the backup takes over.

Heartbeat runs on port: the port used for the heartbeat checks.

Monitor NIC links for failures: also check the link status of the NIC.

D) Configure VIRTUAL SERVERS

Name: a name for the virtual server.

Application port: the port of the target application service.

Protocol: the network protocol of the target application service, TCP or UDP.

Virtual IP Address: the virtual IP address the target application is reached on.

Virtual IP Network Mask: the subnet mask of that virtual IP.

Firewall Mark: used when the target application spans multiple ports; set a firewall mark with iptables (see the sketch after the scheduling list below).

Device: the name of the NIC the virtual IP is attached to.

Re-entry Time: after a Real Server failure, the interval at which the LVS router re-checks that server.

Server timeout: if the Real Server does not respond within this time after the LVS router contacts it, it is considered failed.

Quiesce server: when a Real Server is added or restored, reset all load-queue counters to 0 and redistribute connections from scratch.

Load monitoring tool: obtain the system load on each Real Server via the ruptime or rup command and factor it into scheduling decisions.

Scheduling: the scheduling algorithm used by this virtual server.

Persistence: how long requests from the same client stick to the same Real Server.

Persistence Network Mask: the subnet mask (network segment) used to group clients for persistence.

The load monitoring tool requires ruptime or rup to be installed on each Real Server, and the LVS server must be able to SSH to the Real Servers without a password.

Scheduling includes the following eight scheduling policies:

Round-Robin Scheduling: round-robin; new requests are handed to the Real Servers one after another in turn.

Weighted Round-Robin Scheduling: round-robin combined with per-server weights.

Least-Connection: sends new requests to the Real Server with the shortest connection queue.

Weighted Least-Connections: least-connection combined with per-server weights.

Locality-Based Least-Connection Scheduling (LBLCS): picks the server most recently used for the target IP address; if that server is available and not overloaded (under half its capacity), the request goes there, otherwise it falls back to least-connection. Mainly intended for cache gateway servers.

Locality-Based Least Connections with Replication Scheduling: like LBLCS, but adds a replication policy so that "popular" sites stay cached on the same gateway server whenever possible, avoiding duplicate cache entries across servers. Mainly intended for cache gateway servers.

Destination Hashing Scheduling: chooses the target server by hashing the destination address. Mainly intended for cache gateway servers.

Source Hashing Scheduling: chooses the target server by hashing the source address. Mainly intended for cache gateway servers.
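For the Firewall Mark field above, a minimal iptables sketch (the mark value 80 is an arbitrary illustrative choice, not from the original) that tags HTTP and HTTPS traffic to the VIP with one mark, so a single virtual server definition can cover both ports:

# Tag packets addressed to the VIP on ports 80 and 443 with the same mark.
iptables -t mangle -A PREROUTING -d 10.10.42.22 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 80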

 

E) Add real server

Name: the name of this Real Server.

Address: the IP address of this Real Server.

Weight: the weight of this Real Server. When Real Servers differ in capacity, give the higher-performance servers a higher weight.

 

F) MONITORING SCRIPTS

Sending Program: determine the availability of the application service on the Real Server via an external program (cannot be used together with Send).

Send: send the given string directly to the port specified in the virtual server.

Expect: the expected reply to Sending Program or Send; if the reply matches, the application service on that Real Server is considered healthy.

Treat expect string as a regular expression: match the reply against the Expect value as a regular expression.

Note:

These settings determine whether the target service on a Real Server is running normally; if the service is found to be down, that Real Server is automatically isolated from the virtual server.
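As an illustration of Send/Expect (hedged; the exact strings depend on your application), the same check can be written directly in lvs.cf inside the virtual block:

virtual MpmWeb {
     ...
     send = "GET / HTTP/1.0\r\n\r\n"    # request that nanny writes to the port
     expect = "HTTP"                    # healthy if the reply contains this
     ...
}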

 

III. Start the LVS services [LVS Router end]

/etc/init.d/piranha-gui start

/etc/init.d/pulse start

Once both start successfully, enable them at boot:

chkconfig --level 345 piranha-gui on

chkconfig --level 345 pulse on

 

IV. Real Server system configuration [Real Server end]

#!/bin/bash
# RealServer service script for LVS direct routing:
# binds the VIP to lo:0 and suppresses ARP replies for it.
MpmWeb_VIP=10.10.42.22

start() {
    ifconfig lo:0 $MpmWeb_VIP netmask 255.255.255.255 broadcast $MpmWeb_VIP
    /sbin/route add -host $MpmWeb_VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "RealServer Start OK [lvs_dr]"
}

stop() {
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/ifconfig lo:0 down
    /sbin/route del -host $MpmWeb_VIP
    sysctl -p > /dev/null 2>&1
    echo "RealServer Stopped [lvs_dr]"
}

restart() {
    stop
    start
}

case "$1" in
start)
    start
    ;;
stop)
    stop
    ;;
restart)
    restart
    ;;
status)
    /sbin/ifconfig
    ;;
*)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
esac
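Assuming you save the script as /etc/init.d/realserver (the path and name are illustrative, not from the original), wire it up on each Real Server:

chmod +x /etc/init.d/realserver
/etc/init.d/realserver start                           # bind the VIP on lo:0 now
echo '/etc/init.d/realserver start' >> /etc/rc.local   # re-bind it at boot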

 

V. Configure the front-end proxy on the Real Servers [Real Server end]

nginx listens on port 80 (where the DR traffic arrives) and proxies to the Tomcat instances on port 8080:

worker_processes 1;

error_log logs/error.log notice;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    upstream MpmWeb_cluster {
        ip_hash;
        server 10.10.42.203:8080;
        server 10.10.42.205:8080;
    }

    server {
        listen 80;
        server_name 10.10.42.22;

        location / {
            root html;
            index index.html index.htm;
            proxy_pass http://MpmWeb_cluster;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }

        # Pass PHP scripts to the FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            root html;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}

 

VI. Test

Open http://10.10.42.22/MpmWeb/ in a browser and refresh it repeatedly. If the page keeps loading, the setup is working.
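The same check from the command line, plus a view of how LVS spread the connections (run ipvsadm on the active LVS Router):

# From a client machine, hit the VIP a few times.
for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{http_code}\n" http://10.10.42.22/MpmWeb/; done

# On the active LVS Router, confirm both Real Servers received traffic.
ipvsadm -L -n --stats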

 
