Nginx Reverse Proxy and Load Balancing

Last time I shared how to create an nginx service with Docker. When nginx comes up, two of its features come up constantly: reverse proxying and load balancing. Today I will share how to configure both.

What is a reverse proxy?
When reverse proxies come up, people usually mention forward proxies first. A forward proxy is the ordinary kind of proxy: for example, when writing a crawler and your IP gets blocked, or when Google is unreachable, you route traffic through a proxy server that can reach the destination. Think of the proxy as a springboard: your computer cannot access Google directly, so it asks a server that can. Those are the two situations where I typically use one; there are more advanced uses I have not needed. With forward proxies covered, let's move on to today's protagonist: the reverse proxy.

Definition
A reverse proxy is a proxy server that accepts connection requests from the Internet, forwards them to a server on the internal network, and returns that server's response to the client that made the request. To the outside world, the proxy server itself appears to be the origin server.

Features
  • Hides the backend server's IP address from the client.
  • Speeds up web requests by caching static resources.
  • Improves security: every request from the Internet passes through the proxy first, so protections such as IP filtering can be applied there.

Implementation:
location / {
    # pass the backend's Server header through to the client
    proxy_pass_header Server;
    # forward the original Host header to the backend
    proxy_set_header Host $http_host;
    # pass the client's real IP to the backend
    proxy_set_header X-Real-IP $remote_addr;
    # pass the original scheme (http/https)
    proxy_set_header X-Scheme $scheme;
    # the backend to proxy to
    proxy_pass http://abc.com;
}
If the snippet above looks opaque at first, here is the idea in plain terms: you have two servers, A and B, and your code runs on both. When users access your site, they hit server A, and server A fetches the content from server B and returns it to them. The user never knows the response came from B. That is a reverse proxy.

Let's demonstrate the reverse proxy through the following simple steps
Using the Docker-based nginx setup from the previous article, we can quickly spin up three nginx services to simulate three servers.
  • First create the nginx configuration file
# Create the nginx configuration file
touch nginx.conf
# Create three folders a, b, and c to hold the demo pages
mkdir -p html/a html/b html/c
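The demo below expects each folder to serve a page that identifies its server; one way to create them (the html/ layout matches the volume mounts used when starting the containers, and the file contents match the outputs shown later):

```shell
# give each simulated server a page that identifies it
mkdir -p html/a html/b html/c
echo 'this is a' > html/a/index.html
echo 'this is b' > html/b/index.html
echo 'this is c' > html/c/index.html
```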
  • Write Nginx configuration file
# Nginx configuration file; it mainly specifies the port and the project path
events {
    # Max concurrent connections per worker process
    worker_connections 1024;
}
http {
    server {
        # Listening port
        listen 80;
        # Bound domain
        server_name localhost;
        # Project path (the directory mounted into the container)
        root /usr/share/nginx/html;
    }
}
  • Start the Nginx services
# Create container a (serves host port 80)
docker run -p 80:80 -v $PWD/nginx.conf:/etc/nginx/nginx.conf \
    -v $PWD/html/a:/usr/share/nginx/html --name a-server -d nginx
# Create container b (serves host port 8081)
docker run -p 8081:80 -v $PWD/html/b:/usr/share/nginx/html \
    --name b-server -d nginx
# Create container c (serves host port 8082)
docker run -p 8082:80 -v $PWD/html/c:/usr/share/nginx/html \
    --name c-server -d nginx

# Parameter explanation
    -p 8080:80    Maps port 80 of the container to port 8080 of the host (host:container)
    -v $PWD/a.conf:/etc/nginx/nginx.conf
                  Mounts a.conf from the current host directory over the container's /etc/nginx/nginx.conf
    -v $PWD/html/a:/usr/share/nginx/html
                  Mounts the host directory html/a over the container's /usr/share/nginx/html
    --name        The name of the container
    -d            Run the container in the background and print the container ID
  • Check whether nginx started successfully
# list the containers and their status
docker ps -a
# or request each server directly
curl 127.0.0.1
curl 127.0.0.1:8081
curl 127.0.0.1:8082
  • Modify the configuration file
events {
    #Number of concurrent connections per worker process
    worker_connections 1024;
}
http {
    server {
        # Listening port
        listen 80;
        # Bound domain
        server_name localhost;
        # Project path
        root /usr/share/nginx/html;

        # Reverse proxy: /b -> server b
        location /b {
            rewrite ^/b(.*) /$1 break;
            proxy_pass http://127.0.0.1:8081;
        }

        # Reverse proxy: /c -> server c
        location /c {
            rewrite ^/c(.*) /$1 break;
            proxy_pass http://127.0.0.1:8082;
        }
    }
}

At this point we have a working reverse proxy, which can be verified in the browser:
http://127.0.0.1 outputs this is a
http://127.0.0.1/b outputs this is b
http://127.0.0.1/c outputs this is c
Today's other protagonist is load balancing. Many newcomers find the term intimidating, but it is not hard to implement; we will modify the configuration to build a simple load balancer. First, let's understand what load balancing is.

Definition
Load balancing distributes load across multiple computers, network connections, CPUs, disk drives, or other resources in order to optimize resource usage, maximize throughput, minimize response time, and avoid overload. Using multiple load-balanced components instead of a single one also improves reliability through redundancy.

Features
Distributes traffic across multiple application servers, improving the scalability and reliability of web applications.

Implementation
http {
    # the pool of backend servers to balance across
    upstream myServer {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        listen 80;
        location / {
            # hand requests to the upstream pool (round-robin by default)
            proxy_pass http://myServer;
        }
    }
}
To put the rather official definition above in plain language: user requests are spread across multiple servers so that no single server bears all the pressure. By default, nginx hands out requests in turn (round-robin).
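nginx's default strategy is round-robin; a toy shell sketch of how requests rotate through a pool, purely illustrative (the addresses mirror the demo's two backends):

```shell
# toy round-robin dispatcher: hand requests to upstream servers in turn
servers="127.0.0.1:8081 127.0.0.1:8082"
i=0
out=""
for req in 1 2 3 4; do
  set -- $servers          # load the pool into positional parameters
  shift $(( i % $# ))      # rotate to the next server in the pool
  out="$out req$req->$1"
  i=$((i+1))
done
echo "$out"
```

Real nginx tracks the rotation internally per upstream, so consecutive requests alternate between the two backends just like this loop does.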
Let's demonstrate load balancing through a simple Demo

Modify the configuration file nginx.conf just now
events {
    #Number of concurrent connections per worker process
    worker_connections 1024;
}
http {
    # Define the pool of backend servers to balance across
    upstream myServer {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        #Listening port
        listen 80;
        #Binding domain
        server_name localhost;
        # Project path
        root /usr/share/nginx/html;

        location / {
            proxy_pass http://myServer;
        }
    }
}
Open a browser and go to http://127.0.0.1. Refreshing the page alternates the output between this is b and this is c.

At this point, the basic configuration of reverse proxy and load balancing is shared.

But the witty reader may ask: with load balancing, a user who hits server B may hit server C on the next refresh. Can we make them keep hitting the same server instead? Of course; nginx provides a method for this. The configuration is as follows:
upstream myServer {
  # With ip_hash, requests from the same client IP always go to the same server
  ip_hash;

  # weight sets the polling probability: out of 5 requests,
  # 4 go to server b and 1 goes to server c.
  # Give a higher weight to the server with the better hardware.
  server b.com weight=4;
  server c.com;
}
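Beyond weight and ip_hash, nginx's upstream module supports a few other commonly used options; a sketch (the addresses reuse the demo's ports, and 127.0.0.1:8083 is a hypothetical extra backend):

```nginx
upstream myServer {
    # choose the server with the fewest active connections
    least_conn;
    # take a server out of rotation after 3 failures within 30s
    server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8082;
    # only receives traffic when the servers above are unavailable
    server 127.0.0.1:8083 backup;
}
```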

This post mainly covered nginx's reverse proxying and load balancing, and the content is still fairly basic. nginx is small but comprehensive in functionality; for more modules and directives, check the official documentation.

Precautions
Unless a hot-reload mechanism is in place, nginx must be restarted (or told to reload) for configuration changes to take effect
# Restart the docker container
docker restart <container-id>
# For nginx installed outside of docker
service nginx restart
# Or reload the configuration without a full restart
nginx -s reload
Configuration of proxy_pass in the reverse proxy
location /c {
    # When mapping a sub-path onto a backend's root, this rewrite is required
    rewrite ^/c(.*) /$1 break;
    # Inside a docker container, use the host machine's IP here, not 127.0.0.1
    proxy_pass http://117.189.124.134:8082;
}
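As an alternative to hard-coding the host's IP, the containers can share a user-defined Docker network, on which container names resolve through Docker's embedded DNS (the container name b-server follows the demo above; the network name mynet is arbitrary):

```nginx
# assumes the network exists:  docker network create mynet
# and each container was started with:  --network mynet
location /b {
    rewrite ^/b(.*) /$1 break;
    # the container name resolves on the shared network
    proxy_pass http://b-server:80;
}
```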

Summary: nginx's reverse proxy and load balancing configuration is fairly simple, relying mainly on proxy_pass and upstream. Avoid the pitfalls above and there should be no problems. I am still at the introductory stage with nginx, so if there are errors, please correct me.