Using Apache and Tomcat to Build a Cluster (Load Balancing)


I. The concept of clustering and load balancing

(i) The concept of a cluster

A cluster is a loosely coupled collection of compute nodes consisting of two or more node machines (servers). It presents a single client view of network services or applications (including databases, Web services, and file services) while providing fault-recovery capability close to that of a fault-tolerant machine. A cluster system generally connects two or more node servers through appropriate hardware and software; each cluster node is a stand-alone server running its own processes. These processes can communicate with each other, so that to network clients they appear to form a single system providing applications, system resources, and data. Besides acting as a single system, a cluster can also recover from server-level failures. Its processing capacity can be scaled up from within simply by adding servers to the cluster, and system-level redundancy gives it inherent reliability and availability.
(ii) Classification of clusters
1. High-performance scientific computing clusters:
These are IA (Intel Architecture) cluster systems built to solve complex scientific computing problems, and they form the basis of parallel computing. Instead of using a dedicated parallel supercomputer made up of ten to tens of thousands of separate processors, such a cluster uses a group of 1-, 2-, or 4-CPU servers linked by a high-speed connection, communicating over a common message-passing layer to run parallel applications. A computing cluster of this kind offers processing power comparable to a true parallel supercomputer, at an excellent price/performance ratio.
2. Load-balancing clusters:
A load-balancing cluster provides a more practical system for enterprise needs. It distributes the load as evenly as possible across the nodes of the server cluster, balancing either application-processing load or network-traffic load. Such a system is ideal when a large number of users run the same set of applications: each node handles part of the load, and the load can be reallocated dynamically between nodes to keep it balanced. The same applies to network traffic; a Web server application often receives more traffic than it can process quickly, and the excess must be sent to other nodes. The load-balancing algorithm can also be tuned to the different resources available on each node or to special characteristics of the network.
3. High Availability cluster:
To guarantee high availability of the overall cluster service, the fault tolerance of both hardware and software must be considered. If a node in a high-availability cluster fails, another node takes over its work, and the system environment remains consistent from the user's point of view.

In real-world deployments, these three basic types are often combined.

(iii) Typical clusters

Scientific Computing cluster:
1. Beowulf
When Linux clusters come up, the first thing many people think of is Beowulf, the best-known Linux scientific-computing cluster system. It is actually a general term for a set of publicly available packages that run on the Linux kernel. These include the popular message-passing APIs such as the Message Passing Interface (MPI) and Parallel Virtual Machine (PVM), modifications to the Linux kernel that allow several Ethernet interfaces to be combined, high-performance network drivers, changes to the virtual memory manager, and the Distributed Interprocess Communication (DIPC) service. A shared global process-identifier space allows any process on any node to be reached through the DIPC mechanism.
2. MOSIX
Whereas Beowulf is more like cluster-enabling software installed on top of the system, providing clustering at the application level, MOSIX is a thoroughly modified Linux kernel that provides clustering at the system level. It is completely transparent to applications: existing programs can run on a MOSIX system without any changes. Any node can be freely added to or removed from the cluster, either taking over the work of other nodes or expanding the system. MOSIX uses adaptive process load balancing and memory-ushering algorithms to maximize overall performance; application processes can migrate between nodes to take advantage of the best available resources, much as a symmetric multiprocessing system switches applications between processors. Because MOSIX implements clustering by modifying the kernel, there are compatibility issues, and some system-level applications will not run properly.

Load balancing/High availability clusters

3, LVS (Linux Virtual Server)
This is an open-source project led by a Chinese developer.
It is a load-balancing/high-availability cluster aimed mainly at network applications with heavy traffic, such as news services, online banking, and e-commerce.
LVS is built around a cluster consisting of a master load-balancing server (the director, usually deployed as a redundant pair) and a number of real servers (real-server). The real servers actually provide the services, while the director distributes requests to them according to the configured scheduling algorithm. The structure of the cluster is transparent to users: a client communicates only with a single IP address (the cluster's virtual IP), so from the client's point of view there is only one server.
The real servers can provide many kinds of services, such as FTP, HTTP, DNS, Telnet, NNTP, and SMTP, and the master server controls them. When a client sends a service request to LVS, the director picks a real server to answer it using the chosen scheduling algorithm; the client only ever talks to the load balancer's IP address (the virtual IP, or VIP).
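As a rough illustration only (not part of this article's Windows setup), a minimal LVS configuration on the director in NAT mode might look like the following, where 192.168.0.100 is an assumed VIP and 192.168.0.11/12 are assumed real servers:

# Create a virtual HTTP service on the VIP with round-robin scheduling
ipvsadm -A -t 192.168.0.100:80 -s rr
# Register two real servers behind the VIP (masquerading / NAT forwarding)
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.11:80 -m
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.12:80 -m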

Other clusters:

Cluster systems today come in many forms. Most OS and server vendors offer system-level cluster products, the most typical being the various dual-machine (two-node) systems, along with cluster systems provided by research institutions. There are also application-level cluster systems from software vendors, such as database clusters, application-server clusters, Web-server clusters, mail clusters, and so on.

(iv) Load balancing

1. Concept

As business volume, traffic, and data flows grow rapidly, the processing and computing demands on the existing network grow with them, until a single server simply cannot keep up. In that situation, throwing away existing equipment and doing a massive hardware upgrade wastes the resources already in place; when business volume rises again, another costly upgrade follows, and eventually even the highest-performing equipment cannot keep pace with the growth.

Load balancing (Load Balance) is a cheap, effective, and transparent way out of this situation: it extends the bandwidth of existing network devices and servers, increases throughput, strengthens network data-processing capability, and improves the flexibility and availability of the network.

2. Characteristics and classification

Server load balancing is generally used to improve the overall processing capacity of a group of servers and to improve reliability, availability, and maintainability; the ultimate goal is to speed up server response and thereby improve the user experience.

Structurally, load balancing is divided into local load balancing (server load balance) and global load balancing (global server load balance). The former balances load across a local group of servers; the latter balances load across server groups placed in different locations and on different networks.

Global load balancing has the following characteristics:

(1) Solves network congestion by serving each user from the nearest site, achieving geographic independence
(2) Provides users with better access quality
(3) Improves server response speed
(4) Improves the utilization of servers and other resources
(5) Avoids a single point of failure at any one data center

3. Main applications of load-balancing technology

(1) DNS load balancing. The earliest load-balancing technique was implemented through DNS: multiple addresses are configured under the same name, so a client querying that name receives one of the addresses, and different clients therefore reach different servers. DNS load balancing is simple and efficient, but it cannot distinguish between servers or take a server's current operating state into account.
(2) Proxy-server load balancing. A proxy server can forward requests to internal servers; this acceleration mode noticeably improves access speed for static Web pages. The same idea can also be used as a load-balancing technique, with the proxy server forwarding requests evenly to multiple servers.
(3) Address-translation-gateway load balancing. An address-translation gateway that supports load balancing can map one external IP address to multiple internal IP addresses, dynamically assigning one of the internal addresses to each TCP connection request.
(4) In-protocol load-balancing support. Apart from these three methods, some protocols have load-balancing-related features built in, such as the redirection capability in HTTP, which runs on top of TCP connections.
(5) NAT load balancing. NAT (Network Address Translation) simply translates one IP address into another, typically converting unregistered internal addresses into legal, registered Internet IP addresses. It is used where Internet IP addresses are scarce or where the internal network structure should be hidden from the outside.
(6) Reverse-proxy load balancing. An ordinary proxy relays connection requests from internal-network users to servers on the Internet: the client must specify the proxy server and send requests that would otherwise go directly to an Internet server to the proxy instead. A reverse proxy does the opposite: it accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results to the Internet client that made the request; to that client the proxy behaves as a server. Reverse-proxy load balancing dynamically spreads connection requests from the Internet across multiple servers on the internal network in this reverse-proxy manner (a minimal Apache configuration sketch follows this list).
(7) Hybrid load balancing. In some large networks, where server groups differ in hardware, size, and the services they provide, the most appropriate load-balancing method can be chosen for each group, and another layer of load balancing or clustering can then be applied across those groups so that together they present themselves to the outside world as one new server farm. This is called hybrid load balancing; it is also used when a single load-balancing device cannot handle the volume of connection requests on its own.
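As a rough sketch of item (6), Apache itself can act as a reverse-proxy load balancer using mod_proxy_balancer. This is a different mechanism from the mod_jk setup built later in this article, and the backend addresses below are placeholders:

# Load the proxy modules (Apache 2.2; module file names may differ by build)
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

ProxyRequests Off

# Two backend servers share the load; requests for / are spread across them
<Proxy balancer://backend>
    BalancerMember http://192.168.0.11:8080
    BalancerMember http://192.168.0.12:8080
</Proxy>
ProxyPass / balancer://backend/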

II. Building the cluster and achieving load balancing

(i) Preparation

My system is Windows XP Pro, and the plan is to use one Apache and several Tomcats (two in this example) connected through JK to build a cluster. Here is what needs to be prepared first:

1. JDK. The version I used is jdk1.5.0_06, downloaded from Sun's Java download site.
2. Apache. The version I used is 2.2.4; download address: http://apache.justdn.org/httpd/binaries/win32/apache_2.2.4-win32-x86-openssl-0.9.8d.msi
3. Tomcat. I used the 5.5 zip (non-installer) release. Note that you cannot use the Windows installer version, because installing two identical Tomcats on one machine that way causes errors. Download address: http://apache.mirror.phpchina.com/tomcat/tomcat-5/v5.5.25/bin/apache-tomcat-5.5.25.zip
4. JK, the connector module. There used to be two JK versions, but version 2 has been abandoned; the currently available version is 1.2.25. Each Apache version has a specific JK build to match, so the JK used here must be the one built for Apache 2.2.4. Download address: http://www.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/win32/jk-1.2.25/mod_jk-apache-2.2.4.so

With these four things in hand, we can start building the cluster.

(ii) Installation

1. Anyone who needs to read this article is presumably already familiar with installing the JDK, so I will not repeat that here. Just one reminder: do not forget to configure the environment variables.
2. Installing Apache is not difficult either. During installation you are prompted for the domain name, the Web site name, and the administrator's e-mail address; fill them in as prompted, and if you want to change them later you can simply edit the configuration file. Apart from that, make sure port 80 on the machine is not occupied by another program. The installation path is entirely a matter of personal preference, and the other defaults are fine. After a successful installation there will be an icon in the system tray at the lower right, through which Apache can be started; when the little red dot turns green, the service has started normally (if the service will not start, the configuration entered during installation was probably wrong; uninstall and reinstall). With the default port 80, open a browser, go to http://localhost/, and you should see the words "It works!". Then you can move on to the next step.
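If you are not sure whether port 80 is already taken, one quick check from a Windows command prompt is something like the following (the filter string is only an illustration):

netstat -ano | findstr ":80"

If no line shows a local address ending in :80 in the LISTENING state, the port is free for Apache.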
3. Unzip Tomcat, and remember to make two copies. Here the two Tomcats are named tomcat-5.5.25_1 and tomcat-5.5.25_2; the contents of the two folders are exactly the same. But to run a cluster on one machine, the two Tomcats must not conflict on any port. Go into the tomcat-5.5.25_1/conf directory, open server.xml in a text editor, and change Tomcat's default port 8080 to 8088 (this change is not strictly necessary; I did it because another Tomcat on my machine already occupies port 8080). Then go into tomcat-5.5.25_2/conf and change its 8080 as well; the exact value does not matter as long as no other program occupies that port. With that, Tomcat is installed.
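For reference, after this change the HTTP connector element in tomcat-5.5.25_1/conf/server.xml looks roughly like the following; attributes other than port are left as shipped and may differ slightly between Tomcat builds:

<!-- HTTP/1.1 connector for the first Tomcat, moved from the default 8080 to 8088 -->
<Connector port="8088" maxHttpHeaderSize="8192"
           connectionTimeout="20000" redirectPort="8443" />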
4. JK is just a connector module; there is nothing to install. Simply copy the file mod_jk-apache-2.2.4.so into the modules folder under the Apache installation directory.

In this way, the installation is complete and the configuration starts below.

(iii) Configuration

This part is the key to building the cluster, so I will describe it in as much detail as I can.

1. Configure Tomcat

To prevent conflicts, go into the second Tomcat's home directory, then into its conf directory, and open server.xml to modify the configuration. The main thing to change is the ports: I took every port number and added 1000 to it, so the original port 8009, for example, became 9009. You do not have to follow the same scheme; just make sure nothing conflicts. These values will be needed again when configuring Apache.
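Assuming the "add 1000" convention, the relevant elements in the second Tomcat's server.xml end up roughly as follows (only the port attributes matter; everything else is left as shipped):

<!-- Shutdown port: 8005 becomes 9005 -->
<Server port="9005" shutdown="SHUTDOWN">

<!-- HTTP connector: 8080 becomes 9080 -->
<Connector port="9080" maxHttpHeaderSize="8192"
           connectionTimeout="20000" redirectPort="9443" />

<!-- AJP connector: 8009 becomes 9009 (this is the port workers.properties will point at) -->
<Connector port="9009" protocol="AJP/1.3" redirectPort="9443" />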

2. Configure Apache

(1) Go into the Apache installation directory, then into the conf folder, open httpd.conf in a text editor, and add the following lines at the end of the file:

### Load the mod_jk module
LoadModule jk_module modules/mod_jk-apache-2.2.4.so

### Configure mod_jk
# Load the workers in the cluster
JkWorkersFile conf/workers.properties
# Load the file that maps requests to workers
JkMountFile conf/uriworkermap.properties
# JK log file
JkLogFile logs/mod_jk.log
# Log level
JkLogLevel warn

(2) In the same directory, create a new file named workers.properties, which describes the Web containers (workers). Its contents are as follows:

# Worker list
worker.list=controller,status

# Configuration of the first server, named s1
# ajp13 port, configured in this Tomcat's server.xml, default 8009
worker.s1.port=8009
# Tomcat host address; if it is not this machine, fill in its IP address
worker.s1.host=localhost
worker.s1.type=ajp13
# Weight of this server; the higher the value, the more requests it receives
worker.s1.lbfactor=1

# Configuration of the second server, named s2
worker.s2.port=9009
worker.s2.host=localhost
worker.s2.type=ajp13
worker.s2.lbfactor=1

# The worker named controller is used for load balancing
worker.controller.type=lb
# Number of retries
worker.controller.retries=3
# List of workers that share the requests, separated by commas
worker.controller.balanced_workers=s1,s2
# Controls session stickiness for the load-balanced workers.
# Many articles say setting this to 1 works; in my case it only worked when set to 0.
worker.controller.sticky_session=0
#worker.controller.sticky_session_force=1

worker.status.type=status

(3) Still in the same directory, create a new file named uriworkermap.properties with the following contents:

# All requests are handled by the controller worker
/*=controller
# Requests for /jkstatus are handled by the status worker
/jkstatus=status

# Requests ending in .gif are NOT handled by the controller worker;
# the lines below mean the same for the other extensions
!/*.gif=controller
!/*.jpg=controller
!/*.png=controller
!/*.css=controller
!/*.js=controller
!/*.htm=controller
!/*.html=controller

The "!" here works like the "!" operator in Java: it means "not".

With that, the Apache side is configured.

3. Now modify the Tomcat configuration; both Tomcats need to be changed.

Open the server.xml file from step one again, find the line <Engine name="Catalina" defaultHost="localhost">, and add the attribute jvmRoute="s1", so that the line reads <Engine name="Catalina" defaultHost="localhost" jvmRoute="s1">. The s1 here is the worker name configured for load balancing in step two: if this Tomcat listens on the AJP port assigned to s1 in workers.properties, write s1 here, and the second Tomcat accordingly gets jvmRoute="s2".
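Side by side, the two Engine elements should end up roughly like this, with the jvmRoute values matching the worker names s1 and s2 from workers.properties:

<!-- tomcat-5.5.25_1/conf/server.xml -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="s1">

<!-- tomcat-5.5.25_2/conf/server.xml -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="s2">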

In this way, the configuration is complete.

(iv) Running

Go into each Tomcat's bin directory and run startup.bat to start both Tomcats, then restart Apache and try it out. If nothing goes wrong, the two Tomcat windows should take turns printing the log output for your requests, and the session is shared as well.
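One simple way to watch the load balancing in action (this test page is my own addition, not part of the original setup) is to drop a small JSP into each Tomcat's webapps/ROOT directory, for example as test.jsp:

<%-- test.jsp: shows which Tomcat instance answered and the current session id --%>
<html>
<body>
<h3>Served from: <%= System.getProperty("catalina.home") %></h3>
<h3>Session ID: <%= session.getId() %></h3>
<h3>Time: <%= new java.util.Date() %></h3>
</body>
</html>

Requesting http://localhost/test.jsp several times should show the two catalina.home paths taking turns, which confirms that Apache is distributing requests across both Tomcats.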

At this point, the cluster is set up and load balancing is working.
