Nginx Learning Essays

Source: Internet
Author: User
Tags: sendfile

Off Topic

At my first job the project already had a DBA and an operations team, so I focused only on development and paid little attention to the database and the servers. I remember a user once reported that the site was slow, and my boss asked me to contact operations to check whether it was a server problem; at the time I did not even know what nginx was. On the current project, once development was done we had to set up a two-server cluster and use nginx for forwarding, so I took the opportunity to learn a new skill.

Install Nginx locally

1. Download Nginx. The latest version at the time of writing is nginx-1.9.6.zip; since this is only for local testing, I downloaded the Windows build.

2. Unzip it and place it in a directory of your choice.

3. Configure the environment variables: create NGINX_PATH and add it to PATH, the same way you would configure the JDK (see the sketch below). The advantage is that you do not have to change into the Nginx directory to run its commands.
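A minimal sketch, valid for the current cmd session only; the install path C:\nginx-1.9.6 is an assumption, and a permanent variable would instead be set through the System Properties dialog, just as for the JDK:

rem assumed install location; adjust to wherever the zip was unpacked
set NGINX_PATH=C:\nginx-1.9.6
rem put the nginx executable on the path so it can be run from any directory
set PATH=%PATH%;%NGINX_PATH%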

Check whether the installation was successful:

C:\Users\nginxtest> nginx -v
nginx version: nginx/1.9.6

It seems the following commands still have to be run from inside the Nginx installation directory... not sure why.

Several common commands

# start
start nginx
# reload the configuration
nginx -s reload
# stop
nginx -s stop
# check that the configuration is correct
nginx -t
# view the Nginx processes
tasklist /fi "imagename eq nginx.exe"

Start Nginx

Start it with the start nginx command.

Then you can access it by entering localhost in the browser.

Modifying the configuration: load balancing

Preparation: two Tomcat instances with different port numbers

localhost:7080

localhost:9080

Practice one: when accessing localhost:80, forward to the Tomcat home page at localhost:9080

server {
    # listen on port 80 (localhost:80)
    listen       80;
    server_name  localhost;

    # charset koi8-r;
    # access_log  logs/host.access.log  main;

    location / {
        proxy_pass http://localhost:9080;
    }
}

Practice two: when accessing localhost:80, forward to localhost:9080/springweb/ (the trailing springweb/ path is required)

server {
    # listen on port 80 (localhost:80)
    listen       80;
    server_name  localhost;

    # charset koi8-r;
    # access_log  logs/host.access.log  main;

    location / {
        proxy_pass http://localhost:9080/springweb/;
    }
}
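A related sketch that is not from the original post: when proxy_pass carries a URI (here /springweb/), Nginx replaces the part of the request URI matched by the location prefix with that URI. With the assumed variant below, a request for http://localhost/tomcat/index.jsp would be forwarded to http://localhost:9080/springweb/index.jsp.

server {
    listen       80;
    server_name  localhost;

    # the /tomcat/ prefix of the request URI is replaced by /springweb/
    location /tomcat/ {
        proxy_pass http://localhost:9080/springweb/;
    }
}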

Practice three: reverse proxy with load-balanced request forwarding

# the real forwarding addresses
upstream real_path {
    server localhost:9080 weight=1 max_fails=2 fail_timeout=30s;
    server localhost:7080 weight=1 max_fails=2 fail_timeout=30s;
}

server {
    # listen on port 80 (localhost:80)
    listen 80;
    server_name localhost;

    # charset koi8-r;
    # access_log logs/host.access.log main;

    location / {
        proxy_pass http://real_path;
    }
}
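One common addition, not part of the config above, is to pass the client's information through to the Tomcat instances, which would otherwise only see requests arriving from Nginx itself. A hedged sketch of the location block with the standard proxy headers:

location / {
    proxy_pass http://real_path;
    # forward the original Host header and the client address to the backend
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}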

Appendix I: complete configuration file with comments

Reference: https://www.nginx.com/resources/wiki/start/topics/examples/full/

#user nobody;

# number of worker processes; the recommended setting equals the number of CPU cores
# after setting it to N, tasklist /fi "imagename eq nginx.exe" shows N+1 processes
worker_processes 3;

# error log file
error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    # default maximum number of concurrent connections: 1024
    worker_connections 1024;
}

# HTTP server settings
http {
    # file name extension to file type mapping table
    include mime.types;
    # default file type
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;

    # enable efficient file transfer mode; the sendfile directive specifies whether Nginx
    # calls the sendfile function to output files; set it to on for normal applications
    sendfile on;

    # prevent network congestion
    #tcp_nopush on;

    # keep-alive connection timeout, in seconds
    #keepalive_timeout 0;
    keepalive_timeout 65;

    fastcgi_intercept_errors on;

    # turn on gzip compressed output
    #gzip on;

    # load balancing
    upstream real_path {
        # the weight parameter sets the polling weight and can be tuned to each machine's
        # configuration; the higher the weight, the greater the probability of being chosen
        server localhost:9080 weight=1 max_fails=2 fail_timeout=30s;
        server localhost:7080 weight=4 max_fails=2 fail_timeout=30s;
    }

    # virtual host configuration
    server {
        # listening port
        listen 80;
        # there can be multiple domain names, separated by spaces
        server_name localhost;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        # enable reverse proxying for "/"
        # if changed to location /tomcat/, access http://localhost/tomcat/ instead
        location / {
            proxy_pass http://real_path/;
        }

        error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Appendix II: the 5 distribution methods currently supported by Nginx upstream

Reference: http://blog.chinaunix.net/uid-20662363-id-3049712.html

1. Round-robin (default)
Each request is assigned to a different back-end server in turn; if a back-end server goes down, it is automatically removed.
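For comparison with the examples below, a minimal round-robin upstream needs no extra directives; the addresses are illustrative, reusing those from the weight example:

upstream bakend {
    server 192.168.0.14;
    server 192.168.0.15;
}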
2. weight
Specifies the polling probability: the access ratio is proportional to the weight; this is used when the back-end servers' performance is uneven.
For example:

upstream bakend {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}

3. ip_hash
Each request is assigned according to the hash of the client's IP address, so each visitor consistently reaches the same back-end server, which solves the session problem.
For example:

upstream bakend {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}

4. fair (third party)
Requests are assigned according to the back-end servers' response times, with shorter response times given priority.

upstream backend {
    server server1;
    server server2;
    fair;
}

5. url_hash (third party)
Requests are assigned by the hash of the requested URL, so that each URL is directed to the same back-end server; this is more effective when the back-end servers are caches.
Example: add a hash statement to the upstream block; the server statements must not include weight or other parameters; hash_method specifies the hash algorithm to use.

upstream backend {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
