How to configure server load balancer using Docker + nginx + tomcat7

Source: Internet
Author: User
Tags: chmod, crc32, epoll, hash, sendfile, tomcat, docker run

This article describes how to configure a simple server load balancer with Docker. The host machine is Ubuntu 14.04.2 LTS running two CentOS containers; Nginx is installed on the host and Tomcat 7 in each container. The structure is as follows:

The principle of this solution is to map host ports to Docker container ports (that is, a request to a given host port is forwarded to the corresponding container port), and then configure Nginx on the host to listen on one port and distribute requests to the mapped service addresses according to its rules. This achieves load balancing.

Procedure

1. Prepare the host machine. The host machine is Ubuntu 14.04.2 LTS installed in VMware. The installation itself is not described in detail here.

2. Install Nginx on the host machine by running the following command.

sudo apt-get install nginx

After the installation completes, you can verify it by checking the version. If running the following command prints the Nginx version, the installation succeeded.

$ nginx -v
nginx version: nginx/1.4.6 (Ubuntu)

Then, run the following command to start the Nginx service:

# Start the service
$ sudo service nginx start

# View service status
$ sudo service nginx status
* nginx is running   # indicates the service is started

Access http://localhost in a browser; the default Nginx welcome page should be displayed.

3. Download the Docker image and run the docker pull command. For more information about the image operations, see this article.

4. Start a container and set up port mapping. The command for one of the containers is as follows:

sudo docker run -t -i -p 3222:22 -p 3280:80 87e5b6b3ccc1 /bin/bash

The above command starts a container with an interactive bash session and sets up port mapping using the -p parameter (-p host_port:container_port). Here, port 3222 of the host machine maps to port 22 of the container, and port 3280 of the host maps to port 80 of the container.
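As a small illustration (not a Docker API, just a sketch of the syntax), the -p flags in the command above pair host ports with container ports like this:

```python
def parse_port_flags(args: str) -> dict:
    """Collect the 'host:container' pairs that follow -p flags into a dict."""
    tokens = args.split()
    mappings = {}
    for i, tok in enumerate(tokens):
        if tok == "-p":
            host, container = tokens[i + 1].split(":")
            mappings[int(host)] = int(container)
    return mappings

# The mapping used for the first container in this article.
print(parse_port_flags("-p 3222:22 -p 3280:80"))  # {3222: 22, 3280: 80}
```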

5. Install the JRE and Tomcat 7.0 in the container. First install the JRE:

wget -O jre-7u6-linux-x64.rpm ?BundleId=67387

yum install jre-7u6-linux-x64.rpm

Check whether jre is successfully installed.

java -version
java version "1.7.0_06"
Java(TM) SE Runtime Environment (build 1.7.0_06-b24)
Java HotSpot(TM) 64-Bit Server VM (build 23.2-b09, mixed mode)

Install Tomcat 7.0:

wget http://apache.fayea.com/tomcat/tomcat-7/v7.0.65/bin/apache-tomcat-7.0.65.tar.gz
tar -zxvf apache-tomcat-7.0.65.tar.gz

Start Tomcat: enter the extracted directory, cd to the bin directory, and run the command below. Output like the following indicates that Tomcat started successfully.

bash startup.sh
Using CATALINA_BASE:   /home/apache-tomcat-7.0.65
Using CATALINA_HOME:   /home/apache-tomcat-7.0.65
Using CATALINA_TMPDIR: /home/apache-tomcat-7.0.65/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /home/apache-tomcat-7.0.65/bin/bootstrap.jar:/home/apache-tomcat-7.0.65/bin/tomcat-juli.jar
Tomcat started.

Because Tomcat's default port is 8080 while port 80 is mapped here, you need to change the default port to 80. Enter the conf directory of the Tomcat installation and open the server.xml file with vi.

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

Change it to:

<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

Then, inside this Docker container, visit http://localhost. If the Tomcat home page appears, Tomcat is installed and configured successfully.

6. Create a hello.html file under Tomcat's webapps/ROOT directory containing "hello this is 172.17.0.2", then access http://172.17.0.2/hello.html from the host. The content is displayed as expected.

7. Following the same steps, configure another container, but use different port mappings when starting it. The command is as follows:

sudo docker run -t -i -p 3322:22 -p 3380:80 87e5b6b3ccc1 /bin/bash

Finally, write "hello this is 172.17.0.3" into the hello.html file under the Tomcat webapps/ROOT directory of this container, then access http://172.17.0.3/hello.html from the host; the content is displayed as expected.

8. After the containers are configured, the remaining work is to configure Nginx on the host machine for load balancing.

Go to the /etc/nginx directory, edit nginx.conf with vim, and add the following content inside the http node:

server {
    listen 80;
    server_name 192.168.1.106;
    location / {
        proxy_pass http://blance;
    }
}

upstream blance {
    server localhost:3280 weight=5;
    server localhost:3380 weight=5;
}

First, define an upstream named blance that lists the mapped web-server ports and their weights. Then define a server that listens on port 80; server_name is 192.168.1.106, the IP address of the host machine (a domain name can also be configured). "location /" matches all requests arriving on port 80 and proxies them to the upstream defined above.

9. After saving the configuration above, visit http://192.168.1.106/hello.html in a browser and refresh the page repeatedly. Sometimes the page shows "hello this is 172.17.0.3" and sometimes "hello this is 172.17.0.2", which indicates the configuration is successful. This completes a simple server load balancing setup.
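The alternating behavior seen in step 9 can be sketched as follows. This is an illustration of equal-weight round robin, not nginx's actual implementation: with weight=5 on both backends, requests are spread evenly between the two mapped ports.

```python
from itertools import cycle

# The 'blance' upstream from the config above: two backends, equal weight.
backends = ["localhost:3280", "localhost:3380"]
picker = cycle(backends)  # equal weights degenerate to plain round robin

# Four consecutive "requests" alternate between the two containers.
served = [next(picker) for _ in range(4)]
print(served)
```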


Nginx + Tomcat load balancing cluster solution

This solution comes from a production project I worked on previously and is currently running well. Please test it thoroughly before using it in production.

System architecture

Download software package

[root@Nginx-node1 src]# cd /usr/local/src
[root@Nginx-node1 src]# wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.02.tar.gz
[root@Nginx-node1 src]# wget http://nginx.org/download/nginx-0.8.34.tar.gz
[root@Nginx-node1 src]# wget http://www.keepalived.org/software/keepalived-1.1.19.tar.gz
[root@Nginx-node1 src]# chmod +x *
[root@Nginx-node1 src]# ls -l
-rwxr-xr-x 1 root  241437 10-01 keepalived-1.1.19.tar.gz
-rwxr-xr-x 1 root  621534 03-04 nginx-0.8.34.tar.gz
-rwxr-xr-x 1 root 1247730 03-31 pcre-8.02.tar.gz


Install Nginx

Install Nginx dependent packages

[root@Nginx-node1 src]# tar zxvf pcre-8.02.tar.gz
[root@Nginx-node1 src]# cd pcre-8.02
[root@Nginx-node1 pcre-8.02]# ./configure
[root@Nginx-node1 pcre-8.02]# make && make install


Install Nginx

[root@Nginx-node1]# cd ../
[root@Nginx-node1 src]# tar zxvf nginx-0.8.34.tar.gz
[root@Nginx-node1 src]# cd nginx-0.8.34
[root@Nginx-node1 nginx-0.8.34]# ./configure --prefix=/usr/local/nginx \
> --with-http_stub_status_module \
> --with-http_ssl_module
[root@Nginx-node1 nginx-0.8.34]# make && make install
[root@Nginx-node1 ~]# vim /usr/local/nginx/conf/nginx.conf


Nginx configuration file

user website;
worker_processes 4;

error_log logs/error.log;
pid logs/nginx.pid;
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 10240;
}
http {
    include mime.types;
    default_type application/octet-stream;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 8m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    server_tokens off;

    upstream web {    # the web cluster pool
        ip_hash;
        server 192.168.0.141:8080;
        server 192.168.0.142:8080;
        server 192.168.0.143:8080;
        server 192.168.0.144:8080;
        server 192.168.0.145:8080;
        server 192.168.0.146:8080;
    }

    upstream wap {    # the wap cluster pool
        ip_hash;
        server 192.168.0.151:8080;
        server 192.168.0.152:8080;
        server 192.168.0.153:8080;
        server 192.168.0.154:8080;
        server 192.168.0.155:8080;
        server 192.168.0.156:8080;
    }

    server {
        listen 80;
        server_name www.****.com;

        location / {
            root html;
            index index.html index.htm;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://web;    # note the setting here
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    server {
        listen 80;
        server_name wap.***.com;

        location / {
            root html;
            index index.html index.htm;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://wap;    # note the setting here
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}


Allocation methods supported by Nginx Upstream


Nginx upstream currently supports five allocation methods

* 1. Round Robin (default)

Each request is distributed to the backend servers one by one in turn. If a backend server goes down, it is removed automatically.
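For comparison with the examples that follow, a minimal sketch of this default: an upstream with no modifiers uses round robin (the addresses mirror the weight example below and are illustrative only).

```nginx
upstream bakend {
    server 192.168.0.141;
    server 192.168.0.142;
}
```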

* 2. weight (weighted)

Specifies the round-robin weight. weight is proportional to the share of requests a server receives; use it when backend server performance is uneven.

For example:

upstream bakend {
    server 192.168.0.141 weight=10;
    server 192.168.0.142 weight=10;
}


* 3. ip_hash

Each request is allocated according to the hash of the client IP address, so each visitor consistently reaches the same backend server, which solves the session problem.

For example:

upstream bakend {
    ip_hash;
    server 192.168.0.151:80;
    server 192.168.0.152:80;
}

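The idea behind ip_hash can be sketched as follows. This is an illustration, not nginx's actual algorithm (which hashes part of the client address): the point is that the same client IP always maps to the same backend, preserving sessions.

```python
import zlib

# The two backends from the ip_hash example above.
backends = ["192.168.0.151:80", "192.168.0.152:80"]

def pick_backend(client_ip: str) -> str:
    """Deterministically map a client IP to one backend via a hash."""
    return backends[zlib.crc32(client_ip.encode()) % len(backends)]

# Repeated requests from the same IP always land on the same backend.
assert pick_backend("10.0.0.7") == pick_backend("10.0.0.7")
```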

* 4. fair (third-party)

Requests are allocated according to the response time of the backend servers; servers with shorter response times are served first.

upstream backend {
    server server1;
    server server2;
    fair;
}


* 5. url_hash (third-party)

Requests are allocated by the hash of the requested URL, so each URL is directed to the same backend server. This is effective when the backend servers are caches.

For example, add a hash statement to the upstream. Other parameters such as weight cannot be written in the server statements; hash_method specifies the hash algorithm to use.

upstream backend {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

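A minimal sketch of what url_hash with crc32 achieves (hedged: the third-party module's implementation differs in detail): each request URI consistently maps to one squid cache, so the caches do not duplicate each other's content.

```python
import zlib

# The two cache servers from the url_hash example above.
caches = ["squid1:3128", "squid2:3128"]

def pick_cache(request_uri: str) -> str:
    """Map a request URI to one cache server via crc32."""
    return caches[zlib.crc32(request_uri.encode()) % len(caches)]

# A given URI always hits the same cache on every request.
assert pick_cache("/hello.html") == pick_cache("/hello.html")
```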


* Settings:

The status of each backend server can be set:

1. down: the current server temporarily does not participate in the load.
2. weight: defaults to 1; the larger the weight, the larger the share of the load.
3. max_fails: the allowed number of failed requests, which defaults to 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: the pause time after max_fails failures.
5. backup: requests go to the backup machine only when all other non-backup machines are down or busy, so this machine carries the lightest load.

Nginx supports configuring multiple upstream groups at the same time, for use by different servers.
client_body_in_file_only can be set to on to record client POST data to files for debugging.
client_body_temp_path sets the directory for the record files; up to three directory levels can be configured.
location matches URLs and can redirect or perform further proxying and load balancing.
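The down and backup flags described above can be sketched as follows. This is illustrative Python, not nginx internals, and the 192.168.0.147 backup address is hypothetical: servers marked down are skipped, and the backup is used only when no primary is available.

```python
# Two primaries marked down, plus one backup machine (address hypothetical).
servers = [
    {"addr": "192.168.0.141:8080", "down": True,  "backup": False},
    {"addr": "192.168.0.142:8080", "down": True,  "backup": False},
    {"addr": "192.168.0.147:8080", "down": False, "backup": True},
]

def pick(servers):
    """Return a usable primary if any; otherwise fall back to a backup."""
    primaries = [s for s in servers if not s["down"] and not s["backup"]]
    if primaries:
        return primaries[0]["addr"]
    backups = [s for s in servers if not s["down"] and s["backup"]]
    return backups[0]["addr"] if backups else None

print(pick(servers))  # 192.168.0.147:8080 (all primaries are down)
```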
