Linux operating system cluster principle and practical experience

Clusters on Linux mainly solve the following problems:
High availability (HA)
With cluster management software, when the master server fails, a backup server automatically takes over its work in time, so that users experience uninterrupted service.
High-performance computing (HPC)
That is, making full use of the resources of every computer in the cluster to carry out complex computations in parallel. This is typically used in scientific computing, for example genetic analysis and chemical analysis.
Load balancing
That is, distributing the load among the computers in the cluster according to some algorithm, which relieves the pressure on any single server and lowers the hardware and software requirements placed on it.
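As a toy illustration of the round-robin idea that appears later in this article (a sketch of the scheduling concept only, not LVS code), consecutive requests are simply handed to the servers in turn:

    servers="rs1 rs2"
    i=0
    for req in 1 2 3 4; do
        # reset the positional parameters, then skip i mod N servers
        set -- $servers
        shift $(( i % $# ))
        echo "request $req -> $1"
        i=$(( i + 1 ))
    done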
Linux-based cluster solutions abound. In practice, the most common case is using a cluster for load balancing, for example to provide WWW service. This section describes how to use LVS (Linux Virtual Server) to implement a practical WWW load balancing cluster system.
Introduction to LVS
LVS is an excellent cluster solution initiated and led by Dr. Wensong Zhang. Many commercial cluster products, such as RedHat's Piranha and TurboLinux's Turbo Cluster, are based on the LVS core code. LVS is already widely deployed in production; for details, see http://www.linuxvirtualserver.org/deployment.html. For more information about how LVS works, see http://www.linuxvirtualserver.org.
LVS configuration example
This example uses Linux LVS to balance the load of the WWW and Telnet services. The Telnet cluster service is configured only for testing convenience.
LVS provides three load balancing methods: NAT (Network Address Translation), DR (Direct Routing), and IP Tunneling. Of these, DR is the most common, so only DR load balancing is described here. For testing convenience, all four machines are on the same network segment and are connected through a switch or hub. In a real deployment, it is best to place the virtual server vs1 and the real servers rs1 and rs2 on different network segments; this improves performance and enhances the security of the whole cluster system.
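To see why DR works, here is a rough sketch of the packet path in DR mode (the key point, revisited in the real-server configuration below, is that the real servers also carry the virtual IP on a non-ARPing interface):

    client (192.168.0.200)  -->  VIP 192.168.0.101 (vs1, alias eth0:101)
    vs1: leaves the IP packet unchanged and re-addresses the Ethernet frame
         to the MAC address of the chosen real server (rs1 or rs2)
    rs1/rs2: accept the packet because dummy0 also carries 192.168.0.101
    rs1/rs2  -->  client: the reply is sent directly, with source address
         192.168.0.101, without passing through vs1 again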
Server hardware and software configuration
First of all, although the test environment in this article uses three servers with the same configuration, LVS does not require uniform server specifications in the cluster. On the contrary, the load distribution policy can be adjusted to each server's configuration and load, so that every server in the cluster is fully utilized.
Among the three servers, vs1 acts as the virtual server (that is, the load balancer) and forwards user requests to rs1 and rs2 in the cluster, which actually process them. client is the test machine and can run any operating system. The operating system and network configuration of the four machines are as follows:

vs1: RedHat 6.2, kernel 2.2.19
vs1: eth0 192.168.0.1
vs1: eth0:101 192.168.0.101
rs1: RedHat 6.2, kernel 2.2.14
rs1: eth0 192.168.0.3
rs1: dummy0 192.168.0.101
rs2: RedHat 6.2, kernel 2.2.14
rs2: eth0 192.168.0.4
rs2: dummy0 192.168.0.101
client: Windows 2000
client: eth0 192.168.0.200
Here, 192.168.0.101 is the virtual IP address (VIP) that users access.
Cluster configuration of the Virtual Server
Most of the cluster configuration is performed on the virtual server vs1 and involves the following steps:
Recompile the kernel.
Download Linux kernel version 2.2.19 from http://www.kernel.org.
Download the LVS kernel patch from http://www.linuxvirtualserver.org/software/ipvs-1.0.6-2.2.19.tar.gz. Note that if the Linux kernel you are using is not version 2.2.19, download the LVS kernel patch for the corresponding version. Decompress ipvs-1.0.6-2.2.19.tar.gz and place it in the /usr/src/linux directory, as sketched below.
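A possible command sequence for this step (a sketch; it assumes wget is available and that the 2.2.19 kernel tree has already been unpacked into /usr/src/linux):

    cd /usr/src/linux
    wget http://www.linuxvirtualserver.org/software/ipvs-1.0.6-2.2.19.tar.gz
    # unpacking creates the ipvs-1.0.6-2.2.19 directory
    tar xzf ipvs-1.0.6-2.2.19.tar.gz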
Then apply the patch as follows:

    [root@vs1 /root]# cd /usr/src/linux
    [root@vs1 linux]# patch -p1 < ipvs-1.0.6-2.2.19/ipvs-1.0.6-2.2.19.patch
Next, reconfigure and compile the Linux kernel. Pay special attention to the following options:

Code maturity level options --->
    [*] Prompt for development and/or incomplete code/drivers

Networking options --->
    [*] Kernel/User netlink socket
    [*] Routing messages
    <*> Netlink device emulation
    [*] Network firewalls
    [*] Socket Filtering
    <*> Unix domain sockets
    [*] TCP/IP networking
    [*] IP: multicasting
    [*] IP: advanced router
    [ ] IP: policy routing
    [ ] IP: equal cost multipath
    [ ] IP: use TOS value as routing key
    [ ] IP: verbose route monitoring
    [ ] IP: large routing tables
    [ ] IP: kernel level autoconfiguration
    [*] IP: firewalling
    [ ] IP: firewall packet netlink device
    [*] IP: transparent proxy support
    [*] IP: masquerading
    --- Protocol-specific masquerading support will be built as modules.
    [*] IP: ICMP masquerading
    --- Protocol-specific masquerading support will be built as modules.
    [*] IP: masquerading special modules support
    <*> IP: ipautofw masq support (EXPERIMENTAL) (NEW)
    <*> IP: ipportfw masq support (EXPERIMENTAL) (NEW)
    <*> IP: ip fwmark masq-forwarding support (EXPERIMENTAL) (NEW)
    [*] IP: masquerading virtual server support (EXPERIMENTAL) (NEW)
    [*] IP Virtual Server debugging (NEW)   <-- best selected, to observe LVS debugging information
    (12) IP masquerading VS table size (the Nth power of 2) (NEW)
    <*> IPVS: round-robin scheduling (NEW)
    <*> IPVS: weighted round-robin scheduling (NEW)
    <*> IPVS: least-connection scheduling (NEW)
    <*> IPVS: weighted least-connection scheduling (NEW)
    <*> IPVS: locality-based least-connection scheduling (NEW)
    <*> IPVS: locality-based least-connection with replication scheduling (NEW)
    [*] IP: optimize as router not host
    <*> IP: tunneling
    <*> IP: GRE tunnels over IP
    [*] IP: broadcast GRE over IP
    [*] IP: multicast routing
    [*] PIM-SM version 1 support
    [*] PIM-SM version 2 support
    [*] IP: aliasing support
    [ ] IP: ARP daemon support (EXPERIMENTAL)
    [*] IP: TCP syncookie support (not enabled per default)
    --- (it is safe to leave these untouched)
    < > IP: Reverse ARP
    [*] IP: Allow large windows (not recommended if <16Mb of memory)
    < > The IPv6 protocol (EXPERIMENTAL)
In the list above, the options marked [*] or <*> are required. The general kernel compilation process is not repeated here; a typical sequence is sketched after the following note.
Note: If you are using RedHat's own kernel or a kernel downloaded from RedHat, the LVS patch may already be applied. You can check whether files whose names begin with ip_vs exist under /usr/src/linux/net/ipv4: if so, the patch is already installed.
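For reference, a typical build sequence for a 2.2-era kernel looks like this (a sketch; boot-loader details vary, so adapt the last steps to your system):

    cd /usr/src/linux
    make menuconfig           # select the options listed above
    make dep && make clean
    make bzImage && make modules && make modules_install
    cp arch/i386/boot/bzImage /boot/vmlinuz-2.2.19-ipvs
    # add an entry for the new kernel to /etc/lilo.conf, then run:
    lilo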
Write the LVS configuration file. The configuration file used in this example is as follows:

    # lvs_dr.conf (C) Joseph Mack mack@ncifcrf.gov
    LVS_TYPE = VS_DR
    INITIAL_STATE = on
    VIP = eth0:101 192.168.0.101 255.255.255.0 192.168.0.0
    DIRECTOR_INSIDEIP = eth0 192.168.0.1 192.168.0.0 255.255.255.0 192.168.0.255
    SERVICE = t telnet rr rs1:telnet rs2:telnet
    SERVICE = t www rr rs1:www rs2:www
    SERVER_VIP_DEVICE = dummy0
    SERVER_NET_DEVICE = eth0
    #----------end lvs_dr.conf------------------------------------
Place this file in the /etc/lvs directory.
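Each SERVICE line follows the pattern "SERVICE = <protocol> <service> <scheduler> <real server:port> ...". For example, to switch the WWW service from plain round-robin to weighted least-connection, the line would presumably become the following (an assumption: that the configure script accepts the standard IPVS scheduler names rr, wrr, lc, and wlc):

    SERVICE = t www wlc rs1:www rs2:www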
Use the configure script to process the configuration file. The script can be downloaded from http://www.linuxvirtualserver.org/Joseph.Mack/configure-lvs_0.8.tar.gz, and is also included in the ipvs-1.0.6-2.2.19.tar.gz package:

    [root@vs1 lvs]# configure lvs_dr.conf
This generates several files; only the rc.lvs_dr file is used here. Next, modify /etc/rc.d/rc.local and add the following lines:

    echo 1 > /proc/sys/net/ipv4/ip_forward
    echo 1 > /proc/sys/net/ipv4/ip_always_defrag
    # show the most debugging information
    echo 10 > /proc/sys/net/ipv4/vs/debug_level
Configure the NFS service. This step is only for convenience of administration and is not strictly necessary. Assuming the configuration file lvs_dr.conf is placed in the /etc/lvs directory, the content of the /etc/exports file is:

    /etc/lvs rs1(ro) rs2(ro)
Run the exportfs command to export the directory:

    [root@vs1 lvs]# exportfs
    If you have any trouble, try:

    [root@vs1 lvs]# /etc/rc.d/init.d/nfs restart
    [root@vs1 lvs]# exportfs
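On the real-server side, the exported directory can then be mounted like this (a sketch; it assumes NFS client support is available on rs1 and rs2):

    [root@rs1 /root]# mkdir -p /etc/lvs
    [root@rs1 /root]# mount -o ro vs1:/etc/lvs /etc/lvs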
In this way, each real server can obtain the rc.lvs_dr file through NFS, which makes cluster administration easier: every change to the configuration options in lvs_dr.conf is reflected in the corresponding directory on rs1 and rs2. Also modify /etc/syslog.conf and add the following line: kern.* /var/log/kernel_log. LVS debugging information will then be written to the /var/log/kernel_log file.
    Real Server Configuration
The configuration of the real servers is relatively simple and mainly involves the following:
Configure the telnet and WWW services. The telnet service requires no special attention, but for the WWW service you need to modify the httpd.conf file so that Apache listens on the IP address of the virtual server, as shown below:

    Listen 192.168.0.101:80
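If you also want each real server to remain reachable under its own address (for direct health checks, for example), an additional Listen line can be added. A sketch for rs1 (the second address is that host's own IP, so adjust it per server):

    Listen 192.168.0.101:80
    Listen 192.168.0.3:80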
Disable the ARP response capability of dummy0 on each real server. This step is required; for the reasons, see the document "ARP problem in LVS/TUN and LVS/DR". There are several ways to disable ARP responses on dummy0; the simplest is to add the following lines to /etc/rc.d/rc.local:

    echo 1 > /proc/sys/net/ipv4/conf/all/hidden
    ifconfig dummy0 up
    ifconfig dummy0 192.168.0.101 netmask 255.255.255.0 broadcast 192.168.0.255 up
    echo 1 > /proc/sys/net/ipv4/conf/dummy0/hidden
Modify /etc/rc.d/rc.local again and add the following line (this can be combined with the previous step):

    echo 1 > /proc/sys/net/ipv4/ip_forward
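After rc.local has run, a quick check on each real server should show all three flags set to 1 (a sketch; the hidden entries exist in 2.2.14 and later kernels):

    [root@rs1 /root]# cat /proc/sys/net/ipv4/conf/all/hidden
    1
    [root@rs1 /root]# cat /proc/sys/net/ipv4/conf/dummy0/hidden
    1
    [root@rs1 /root]# cat /proc/sys/net/ipv4/ip_forward
    1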
    LVS test
    After completing the preceding configuration steps, you can test LVS as follows:
Run /etc/lvs/rc.lvs_dr on vs1, rs1, and rs2. Note that the /etc/lvs directory on rs1 and rs2 is exported by vs1. If your NFS configuration failed, copy /etc/lvs/rc.lvs_dr from vs1 to rs1 and rs2 (as sketched below) and run it on each machine. Make sure that Apache has been started on rs1 and rs2 and that telnet logins are allowed.
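If NFS is unavailable, the file can be copied by hand, for example (a sketch; rcp or ftp would work equally well on systems of this vintage):

    [root@vs1 /root]# scp /etc/lvs/rc.lvs_dr rs1:/etc/lvs/
    [root@vs1 /root]# scp /etc/lvs/rc.lvs_dr rs2:/etc/lvs/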
Run telnet 192.168.0.101 from the client. If a prompt like the following appears after logon, the cluster has started working (assuming you log on as user guest):
[guest@rs1 guest]$  <-- indicates that you have logged on to rs1
Open another telnet window and log on again; this time the prompt changes:
[guest@rs2 guest]$  <-- indicates that you have logged on to rs2
Then run the following command on vs1:

    [root@vs1 /root]# ipvsadm
The output should be:

    IP Virtual Server version 1.0.6 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port        Forward Weight ActiveConn InActConn
    TCP  192.168.0.101:telnet rr
      -> rs2:telnet                Route   1      1          0
      -> rs1:telnet                Route   1      1          0
    TCP  192.168.0.101:www rr
      -> rs2:www                   Route   1      0          0
      -> rs1:www                   Route   1      0          0
So far, the telnet LVS service has been verified to work. Next, test whether WWW works: open http://192.168.0.101/ in your browser. To tell more easily which real server responded, place a different test page (test.html) on rs1 and rs2:

    On rs1: "I am real server #1"; on rs2: "I am real server #2".
Refresh the page (http://192.168.0.101/test.html) several times. If you see both "I am real server #1" and "I am real server #2", the WWW LVS system is working properly.
However, because of the caching mechanisms of Internet Explorer or Netscape, you may keep seeing only one of them. Still, ipvsadm shows that the page requests have been distributed between the two real servers, as shown below:

    IP Virtual Server version 1.0.6 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port        Forward Weight ActiveConn InActConn
    TCP  192.168.0.101:telnet rr
      -> rs2:telnet                Route   1      0          0
      -> rs1:telnet                Route   1      0          0
    TCP  192.168.0.101:www rr
      -> rs2:www                   Route   1      0          5
      -> rs1:www                   Route   1      0          4
Alternatively, you can use lynx on Linux as the test client for a clearer result. Run the following command:

    [root@client /root]# while true; do lynx -dump http://192.168.0.101/test.html; sleep 1; done
"I am real server #1" and "I am real server #2" then appear alternately every second, clearly showing that the responses come from two different real servers.