Project Practice 2.2: Implementing nginx Reverse Proxy Load Balancing, Dynamic/Static Separation, and Caching

Source: Internet
Author: User
Tags: install, openssl, nginx, reverse proxy


For the general project flowchart, see http://www.cnblogs.com/along21/p/7435612.html

Experiment 1: Implementing reverse proxy load balancing and dynamic/static separation

1. Environment preparation:

Machine name   IP configuration      Service role           Remarks
nginx          VIP: 172.17.11.11     reverse proxy server   enable proxying; configure health checks and scheduling
rs01           RIP: 172.17.22.22     backend server         static-srv group
rs02           RIP: 172.17.1.7       backend server         static-srv group
rs01           RIP: 172.17.77.77     backend server         default-srv group
rs02           RIP: 172.17.252.111   backend server         default-srv group

2. Download, compile, and install Tengine

Reason: nginx's built-in health-check mechanism can be used, but it is not easy to understand. Tengine's health-check module is easy to configure and easy to understand. Tengine is a fork of nginx (secondary development by Taobao) and is used in much the same way.

(1) Download from the official website: http://tengine.taobao.org (also available in Chinese)

tar xf tengine-2.1.1.tar.gz

cd tengine-2.1.1

(2) Install the build dependencies

yum -y groupinstall "Development Tools"

yum -y install openssl-devel

yum -y install pcre-devel

(3) Compile and install

./configure --prefix=/usr/local/tengine    # specify the installation directory

make && make install

3. Set up the proxy server's configuration file

cd /usr/local/tengine/conf

If nginx is already installed on the machine, you can copy its nginx.conf into /usr/local/tengine/conf/ directly; if not, write one yourself.

vim nginx.conf — the global and http segments can be left at their defaults.

① Define the upstreams: the backend server groups

upstream static-srv {
        server 172.17.22.22:80;
        server 172.17.1.7:80;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
}
upstream default-srv {
        server 172.17.77.77:80;
        server 172.17.252.111:80;
        server 172.17.1.7:80;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
}
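The rise/fall behavior of the check directive — a server is marked up only after `rise` consecutive successful probes and down only after `fall` consecutive failures — can be sketched as a small state machine. This is an illustrative Python model, not Tengine's actual implementation:

```python
# Sketch of the rise/fall logic behind
# `check interval=3000 rise=2 fall=5 timeout=1000 type=http`.
class HealthCheck:
    def __init__(self, rise=2, fall=5, up=True):
        self.rise, self.fall = rise, fall
        self.up = up              # current state of the backend
        self.successes = 0        # consecutive successful probes
        self.failures = 0         # consecutive failed probes

    def probe(self, ok):
        """Record one probe result and return the resulting state."""
        if ok:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.rise:
                self.up = True
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= self.fall:
                self.up = False
        return self.up
```

With the defaults above, a backend survives four failed probes, goes down on the fifth, and needs two consecutive successes before traffic is routed to it again.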

② Configure dynamic/static separation in the location blocks of the server segment

server {
        listen 80;
        location /stats {                        # health-check status page
                check_status;
        }
        location ~* \.(jpg|png|gif|jpeg)$ {
                proxy_pass http://static-srv;
        }
        location ~* \.(css|js|html|xml)$ {
                proxy_pass http://static-srv;
        }
        location / {
                proxy_pass http://default-srv;
        }
}
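The routing decision made by those location blocks can be sketched in Python. The extension list mirrors the regexes above; `route` is a hypothetical helper for illustration, not part of nginx:

```python
import re

# Mirror of the location rules: image and asset extensions go to the
# static-srv upstream, everything else falls through to default-srv.
STATIC_RE = re.compile(r"\.(jpg|png|gif|jpeg|css|js|html|xml)$", re.IGNORECASE)

def route(uri):
    """Return the upstream group a request URI would be proxied to."""
    return "static-srv" if STATIC_RE.search(uri) else "default-srv"

print(route("/logo.PNG"))    # static-srv
print(route("/index.php"))   # default-srv
```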

4. Start the Tengine service

① Start the service from the installation path

cd /usr/local/tengine/sbin/

./nginx          # start Tengine

./nginx -s stop  # stop it

② You can also enable it to start at boot.

On CentOS 7, add or modify /usr/lib/systemd/system/nginx.service; on CentOS 6, add an init script under /etc/init.d/.

 

5. Enable the backend web Service

systemctl start nginx

systemctl start php-fpm

systemctl start mariadb

6. Test

(1) Test whether the reverse proxy works: http://172.17.11.11/ — the web page loads successfully.

(2) Test the status page: http://172.17.11.11/stats

(3) Test dynamic/static separation

Stop the services on the static backend server group; static content then fails to load, confirming that static requests are routed to that group.

Experiment 2: nginx implements the cache Function

Requirement Analysis: Why cache?

The fundamental purpose of caching is to improve website performance and reduce the pressure that frequent data access puts on the database; reasonable caching also reduces CPU load. In a modern computer architecture, operating on data in memory is orders of magnitude faster than operating on data stored on disk, and operating on data in a simple text structure is orders of magnitude faster than querying it from a database.

For example, suppose every visit to a website reads the site title from the database, and each read takes 15 milliseconds. With 100 users each accessing the site 10 times per hour (ignoring concurrent access for now), that is 1000 database reads per hour, or 15000 milliseconds spent on reads. If the page is cached instead, it does not have to be read from the database on every access, which greatly improves website performance.
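The arithmetic in that example works out as follows (a toy calculation, not a benchmark):

```python
# Toy calculation from the example above: 100 users, 10 database
# reads per user per hour, 15 ms per read.
users = 100
reads_per_user_per_hour = 10
ms_per_read = 15

reads_per_hour = users * reads_per_user_per_hour   # 1000 reads per hour
db_time_ms = reads_per_hour * ms_per_read          # 15000 ms spent reading
print(reads_per_hour, db_time_ms)  # 1000 15000
```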

Principle:

Cached data consists of two parts (index and data):
① an index into the stored data, kept in memory;
② the cached data itself, stored in disk space.

Analysis: when, for example, a .jpg response is cached, its URI serves as the index in memory, and the actual image data is stored on disk. Because there can be many cached objects, the storage directory on disk is layered: the URI is hashed into a hexadecimal value, the last hex character [0-f] names the level-1 directory, the next two characters [00-ff] name the level-2 directory, and so on for the level-3 directory.
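The directory layering can be sketched in Python. nginx/Tengine names the cache file after the MD5 hex digest of the cache key and slices directory names from the end of the digest — with levels=1:2, the last character is the level-1 directory and the two characters before it are the level-2 directory. The key and root path below are made up for illustration:

```python
import hashlib

def cache_path(key, root="/data/cache", levels=(1, 2)):
    # The cache file is named after the MD5 hex digest of the key;
    # each directory level takes the next slice from the end of it.
    digest = hashlib.md5(key.encode()).hexdigest()
    dirs, pos = [], len(digest)
    for width in levels:
        dirs.append(digest[pos - width:pos])
        pos -= width
    return "/".join([root] + dirs + [digest])

print(cache_path("/pic/logo.jpg"))
```

Passing levels=(1, 2, 2) gives a three-level layout like the one configured in the next experiment.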

1. Environment preparation: see the preceding experiment. The experiment structure is as follows:

2. Set the configuration file of the proxy server

① First define the cache in the http segment

proxy_cache_path /data/cache levels=1:2:2 keys_zone=proxycache:10m inactive=120s max_size=1g;

Analysis: this defines a cache whose path is /data/cache, with a three-level directory layout: level 1 uses one hex character [0-f], levels 2 and 3 use two hex characters [00-ff] each. keys_zone names the cache proxycache and gives its in-memory key zone 10 MB; inactive=120s evicts entries not accessed for 120 seconds; max_size=1g caps the disk space at 1 GB.

② Reference the cache in the server segment

proxy_cache proxycache;           # reference the cache zone defined above; the same zone can be used in several places
proxy_cache_key $request_uri;     # hash the request URI as the cache key
proxy_cache_valid 200 302 1h;     # cache 200 and 302 responses for 1 hour
proxy_cache_valid any 1m;         # cache all other responses for 1 minute
add_header Along-Cache "$upstream_cache_status from $server_addr";  # add a response header showing the cache status and which server answered

3. Test: visit http://172.17.11.11/

In the browser's developer tools (F12) you can see that the added header is present.

The cache directory is also generated on disk.

 
