A simple understanding of Nginx: load balancing

Source: Internet
Author: User
Tags: hash, sendfile, nginx, reverse proxy

Medium and large projects have to take distribution into account. The previous articles focused on clustering techniques for data processing; today let's look at load balancing for servers with Nginx. Besides serving static resources, Nginx can also decide which backend service a request is sent to.

Installation of Nginx

After downloading Nginx, we can start the service simply by double-clicking nginx.exe.

Friendly reminder: Nginx's installation path must not contain Chinese characters.
Alternatively, we can start the service from the command line (cmd). First change into the D:\Chirs\Downloads\nginx-1.11.11\nginx-1.11.11 directory and run:

nginx
And, by the way, the command to stop it:

nginx -s stop
That's how simple it is to start Nginx. To check whether it started successfully, just open a browser and visit 127.0.0.1 or localhost.

Basic commands for Nginx

The start command was mentioned above: nginx.exe

Reload: nginx.exe -s reload
Stop: nginx.exe -s stop
Check configuration validity: nginx.exe -t

Friendly reminder: prefer the reload command when restarting. Some people like to stop the service and then start it again; if the modified configuration is wrong, this leaves Nginx completely down. With reload, even if the modified configuration is wrong, only the newly changed functionality is affected; the previous service keeps running.

Nginx and Tomcat load balancing

In the current big-data age of tens of thousands of requests, distribution is one of the factors we must consider, and Nginx can help us ease that pressure. Through its reverse proxy we can send requests to different Tomcat instances, greatly relieving the load on any single server.

Prepare two Tomcats

To achieve load balancing we need multiple servers so that Nginx can distribute requests evenly across them. Here two Tomcat instances are enough to demonstrate the effect; it's fine to just configure them with different access ports.

nginx.conf

First, here is the default Windows configuration file from the official distribution. We will explain it item by item below.

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}

#user nobody;: the user directive. On Windows it is not set (the comment clearly reads nobody), but on Linux we recommend writing user nginx nginx; (user and group).

worker_processes 1;: the number of worker processes; a common rule of thumb is CPU cores × 2.

error_log: the file path where Nginx records its error log.

pid: the file where Nginx stores the process ID of its master process.

events: sets connection-processing properties, such as the maximum number of connections per worker (worker_connections).

http: the block in which Nginx's HTTP behavior, including load balancing, is configured.

include mime.types;: sets the MIME types, which are defined in the mime.types file.

default_type application/octet-stream;: sets the default response type.

log_format: the output format of the access log.

Log format parameter explanation:

$remote_addr and $http_x_forwarded_for: record the client's IP address;

$remote_user: records the client user name;

$time_local: records the access time and time zone;

$request: records the requested URL and the HTTP protocol;

$status: records the request status; 200 on success;

$body_bytes_sent: records the size of the response body sent to the client;

$http_referer: records the page from which the visit was linked;

$http_user_agent: records information about the client's browser;
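Putting these variables together, the (commented-out) log format from the configuration above can be written out cleanly as:

```nginx
# access-log format built from the variables explained above
log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

access_log  logs/access.log  main;
```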

sendfile on;: specifies whether Nginx uses the sendfile system call (zero copy) to output files. For normal applications this should be set to on. For heavy disk-I/O workloads such as file downloads, it can be set to off to balance disk and network I/O processing speed and reduce system load.
 

tcp_nopush on;: enables or disables the socket TCP_CORK option; it only takes effect when sendfile is used.
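As a small sketch of how these directives are typically combined for serving static files (values taken from the default configuration above):

```nginx
sendfile           on;   # zero-copy file transmission
tcp_nopush         on;   # batch response headers with the start of the file; needs sendfile
keepalive_timeout  65;   # seconds to keep idle client connections open
```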

Modifying nginx.conf in detail

Above we described the various Nginx parameters in detail. Before configuring, let's take a look at the initial location configuration inside the http block of nginx.conf.
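For reference, the relevant part of the default configuration is:

```nginx
server {
    listen       80;
    server_name  localhost;

    location / {
        root   html;
        index  index.html index.htm;
    }
}
```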

From this we can see that when we visit localhost (server_name) on the configured port (listen), Nginx serves the index.html or index.htm page from the html folder.

What we want now is for a request to Nginx to be forwarded to one of our designated Tomcats. As you might guess, we just need to modify the location's mapping path. But our mapping target is a selector over several servers, so we first construct that selector:

upstream mynginxserver {
    server 192.168.1.183:8888 weight=2;
    server 192.168.1.183:8080 weight=1;
}

Here weight is the weight: when Nginx chooses a server, it makes its choice based on these values.

Then we point the location's mapping path at mynginxserver:

location / {
    proxy_pass http://mynginxserver;
}
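Putting the pieces together, the relevant part of nginx.conf might look like this minimal sketch (listening port and Tomcat addresses as in this article; adjust them to your environment):

```nginx
http {
    include       mime.types;
    default_type  application/octet-stream;

    # selector over the two Tomcat instances
    upstream mynginxserver {
        server 192.168.1.183:8888 weight=2;
        server 192.168.1.183:8080 weight=1;
    }

    server {
        listen       80;
        server_name  localhost;

        # forward every request to the upstream group
        location / {
            proxy_pass http://mynginxserver;
        }
    }
}
```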

Note: the proxy_pass value must begin with http://. Once everything is configured, we reload Nginx (nginx.exe -s reload). Now let's look at the result of visiting the two Tomcats directly.
Note that the ports in the two URLs are different:
TOMCAT1:

TOMCAT2:

And then we access through the Nginx port: http://192.168.1.183:802/springtests/


The effect: the same request returns one of two pages; in fact two different Tomcats are being requested. In practice we run exactly the same project on both Tomcats, so for the user it is the same application, but we have achieved load balancing.

Several load-balancing strategies

Above we achieved load balancing. Nginx provides us with several strategies for it.

Default policy: round robin (polling):

upstream mynginxserver {
    server 192.168.1.183:8888;
    server 192.168.1.183:8080;
}

Requests go to the different Tomcats in the order they arrive. If one server is down, it is automatically skipped.

Least connections (least_conn): as the name implies, the server with the fewest active connections is chosen:

upstream mynginxserver {
    least_conn;
    server 192.168.1.183:8888;
    server 192.168.1.183:8080;
}
Weight: this is how load balancing was implemented above. The default value is 1. Nginx decides which Tomcat to send a request to based on these weights:
upstream mynginxserver {
    server 192.168.1.183:8888 weight=2;
    server 192.168.1.183:8080 weight=1;
}
ip_hash: a hash value is calculated from the requesting client's IP, and the Tomcat is selected according to that hash. The effect is that a given client always accesses the same Tomcat from start to finish, so its session stays on one server:
upstream mynginxserver {
    ip_hash;
    server 192.168.1.183:8888;
    server 192.168.1.183:8080;
}
url_hash: the same kind of effect as ip_hash, except that the hash is computed from the request URI, so requests for the same URL always go to the same server:
upstream mynginxserver {
    hash $request_uri;
    server 192.168.1.183:8888;
    server 192.168.1.183:8080;
}
fair: based on response time; whichever server responds fastest is chosen (this requires the third-party upstream-fair module):
upstream mynginxserver {
    fair;
    server 192.168.1.183:8888;
    server 192.168.1.183:8080;
}
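The strategies above can also be combined with per-server parameters. As a hedged sketch using the same addresses as this article: max_fails and fail_timeout control when a server is considered failed, and backup marks a server that only receives traffic when the others are down:

```nginx
upstream mynginxserver {
    # consider the server failed after 3 errors within 30 seconds
    server 192.168.1.183:8888 max_fails=3 fail_timeout=30s;
    # only used when the primary server is unavailable
    server 192.168.1.183:8080 backup;
}
```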
Nginx Address Mapping

Besides server load balancing, another highlight of Nginx is address mapping: using it as a resource (static file) server. In web development we often need to upload resources to the server, and we can't keep putting them on Tomcat: that would greatly increase Tomcat's load, and the data is easily lost. Nginx can solve this problem.

In fact, address mapping was already involved when we implemented load balancing above; location is the bridge that performs the mapping.

location ~ ^/images/(.*)
# location ~ ^/images/(.*\.jpg)
# "." matches any character, "*" means any number of them,
# "\.jpg" matches files with the .jpg suffix
{
    expires 1s;
    alias D:/zxh/test/$1;   # "$1" is the content captured by the parentheses in the location
    index index.html index.htm;
    break;
}

The location above maps server + port + /images/... to the D:/zxh/test folder.
For example, I access in the browser: http://192.168.1.183:802/images/test.jpg

At this point Nginx will look under D:/zxh/test to see whether there is a test.jpg image there.
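A prefix location works just as well as the regex form for this; the following sketch (the /static/ prefix is hypothetical, the folder is the one from this article) serves everything under /static/ from the local disk and lets browsers cache it:

```nginx
# serve /static/foo.jpg from D:/zxh/test/foo.jpg
location /static/ {
    alias D:/zxh/test/;   # trailing slash replaces the /static/ prefix
    expires 7d;           # cache static assets on the client for a week
}
```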

The above is a collation of material written by others that I found on the Internet. If you don't like it, please don't flame.
