Nginx basic configuration

1. Advantages and disadvantages of Apache server and nginx:
Apache has long been used in large numbers as an HTTP server.
Apache performs well and provides a wide range of functions through modules.
1) Apache supports concurrent responses to clients: after the httpd daemon starts, it spawns multiple child processes/threads, and each child process/thread handles client requests independently.
2) Apache can serve both static and dynamic content. For example, PHP parsing is handled by a PHP module (usually mod_php5, or one built with apxs2) rather than by CGI, which performs poorly.
3) Disadvantages:
Apache is therefore usually called a process-based server, i.e. a multi-process HTTP server, because it must create a child process/thread to respond to each user request.
The drawback is that under heavy concurrency (common on large portal sites) a large number of threads is needed, consuming a great deal of CPU and memory. Concurrent processing is therefore not Apache's strength.
4) Solution:
Another kind of web server, superior in concurrency, has emerged: the asynchronous server. The best known are Nginx and Lighttpd. An asynchronous server works in an event-driven mode: besides handling users' concurrent requests, it needs only one or a few threads, so it consumes very few system resources. Such servers are also called lightweight web servers.
For example, for 10,000 concurrent connections, nginx may use only a few MB of memory, while Apache may need several hundred MB.
2. Practical usage:
1) Using Apache alone as an HTTP server needs no special introduction; it is a very common setup.
As mentioned above, Apache supports PHP and other server-side scripts through its own modules, and its performance there is good.
2) Nginx or Lighttpd can likewise be used alone as an HTTP server.
Like Apache, nginx and lighttpd can extend the server's functionality through modules and are configured through a conf configuration file.
For PHP, neither nginx nor lighttpd has a built-in PHP module; both rely on FastCGI instead.
Lighttpd's modules provide CGI, FastCGI, SCGI, and other interfaces; Lighttpd can spawn FastCGI backends automatically or use externally spawned processes.
Nginx does not process PHP itself; it relies on FastCGI to hand PHP requests to an externally managed PHP process.
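As a concrete illustration, a typical PHP-over-FastCGI location looks roughly like the sketch below. It assumes a php-fpm or spawn-fcgi process already listening on 127.0.0.1:9000 (the same address used in the virtual host examples later in this article) and an assumed document root.

location ~ \.php$ {
    root          /var/www/nginx-default;                  # assumed document root
    fastcgi_pass  127.0.0.1:9000;                           # external PHP FastCGI process
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include       fastcgi_params;                           # assumes the stock fastcgi_params file
}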

----------------------------- Rewrite all non-http://bbs.it-home.org/ access => http://bbs.it-home.org/

server_name web90.***.com;

if ($host = "web90.***.com") {
    rewrite ^(.*)$ http://bbs.it-home.org/$1 permanent;
}
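On newer nginx versions a plain server-level return avoids the if block entirely; a sketch (not from the original article) using the same host names:

server {
    listen      80;
    server_name web90.***.com;
    # 301-redirect everything to the canonical host; $request_uri keeps the path and query string
    return 301 http://bbs.it-home.org$request_uri;
}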

--------------------------------- Nginx stop / smooth restart

Nginx signal control

TERM, INT -- fast shutdown
QUIT -- graceful shutdown
HUP -- graceful restart, reload the configuration file
USR1 -- reopen the log files (very useful for log rotation)
USR2 -- upgrade the executable on the fly
WINCH -- gracefully shut down worker processes
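For the most common cases the same operations can also be triggered through the nginx binary's -s option; a convenience sketch, assuming the install prefix used elsewhere in this article:

/usr/local/webserver/nginx/sbin/nginx -s quit     # graceful shutdown (QUIT)
/usr/local/webserver/nginx/sbin/nginx -s stop     # fast shutdown (TERM)
/usr/local/webserver/nginx/sbin/nginx -s reload   # reload the configuration (HUP)
/usr/local/webserver/nginx/sbin/nginx -s reopen   # reopen the log files (USR1)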


1) Graceful stop:

kill -QUIT <nginx master process ID>

kill -QUIT `cat /usr/local/webserver/nginx/logs/nginx.pid`


2) Fast stop:

kill -TERM <nginx master process ID>

kill -TERM `cat /usr/local/webserver/nginx/logs/nginx.pid`

kill -INT <nginx master process ID>

kill -INT `cat /usr/local/webserver/nginx/logs/nginx.pid`

3) Force-kill all nginx processes:

pkill -9 nginx


Smooth restart (reload the configuration):

kill -HUP <nginx master process ID>

kill -HUP `cat /usr/local/webserver/nginx/logs/nginx.pid`

----------------------------- nginx.conf


worker_processes 8;

The number of worker processes.

Usually set equal to the total number of CPU cores, or twice that; for example, two quad-core CPUs give a total of 8 cores.
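To check the core count on Linux before picking a value, and, on reasonably recent nginx versions, to let nginx pick it automatically, something like this works (a sketch):

# Count logical CPU cores on Linux:
grep -c ^processor /proc/cpuinfo

# On recent nginx versions you can instead set:
#   worker_processes auto;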

events
{
    use epoll;                   # network I/O model: epoll is recommended on Linux, kqueue on FreeBSD
    worker_connections 65535;    # number of allowed connections
}


location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
{
    access_log off;    # disable logging
    expires    30d;    # the expires directive adds caching headers: cache locally for 30 days
}

location ~ .*\.(js|css)$
{
    access_log off;    # disable logging
    expires    1h;
}



================================ Daily scheduled nginx log rotation script

vim /usr/local/webserver/nginx/sbin/cut_nginx_log.sh

#!/bin/bash
# This script runs at 00:00

# The Nginx logs path
logs_path="/usr/local/webserver/nginx/logs/"

mkdir -p ${logs_path}$(date -d "yesterday" +"%Y")/$(date -d "yesterday" +"%m")/
mv ${logs_path}access.log ${logs_path}$(date -d "yesterday" +"%Y")/$(date -d "yesterday" +"%m")/access_$(date -d "yesterday" +"%Y%m%d").log
kill -USR1 `cat /usr/local/webserver/nginx/logs/nginx.pid`



chown -R www:www cut_nginx_log.sh
chmod +x cut_nginx_log.sh


crontab -e
00 00 * * * /bin/bash /usr/local/webserver/nginx/sbin/cut_nginx_log.sh


# /sbin/service crond restart
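To sanity-check the script before leaving it to cron, it can be run once by hand and the archive directory inspected; a quick sketch using the same paths as above:

/bin/bash /usr/local/webserver/nginx/sbin/cut_nginx_log.sh
ls -l /usr/local/webserver/nginx/logs/$(date -d "yesterday" +"%Y")/$(date -d "yesterday" +"%m")/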

-------------- Configure gzip compression for nginx

Compressing html, css, js, php, jhtml, and similar files generally shrinks them to about 25% of their original size; in other words, a 100 KB HTML file comes out at roughly 25 KB after compression. This saves a lot of bandwidth and reduces the server load.
Configuring gzip in nginx is fairly simple.

Generally, you only need to add the following lines to the http section of nginx.conf.

gzip on;
gzip_min_length 1000;
gzip_buffers 4 8k;
gzip_types text/plain application/x-javascript text/css text/html application/xml;

Restart nginx.
You can then use a gzip check tool to verify that gzip is enabled for a page:
http://gzip.zzbaike.com/
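A quick command-line check also works; a sketch, replacing the URL with the page you want to test:

# Request the page with gzip accepted and look for "Content-Encoding: gzip" in the headers:
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip, deflate" http://bbs.it-home.org/ | grep -i content-encoding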

--------------- How to redirect an nginx error page

error_page 404 /404.html;

The 404.html file must be in the html directory under the nginx main directory. If you want to jump straight to another address after a 404 error, you can instead set:


error_page 404 http://bbs.it-home.org/;


Common errors such as 403 and 500 can be defined in the same way.
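For example, 403 and the 50x errors are often mapped the same way; a sketch following the same pattern (the /50x.html page and its root are assumptions, not from the original article):

error_page 403 /403.html;
error_page 500 502 503 504 /50x.html;

location = /50x.html {
    root /var/www/nginx-default;   # assumed html directory; adjust to yours
}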


Note that the 404.html page must be larger than 512 bytes; otherwise IE will replace it with its own default error page.

------------------------------ Virtual host configuration

server {
    listen      80;
    server_name localhost;
    access_log  /var/log/nginx/localhost.access.log;

    location / {
        root  /var/www/nginx-default;
        index index.php index.html index.htm;
    }

    location /doc {
        root      /usr/share;
        autoindex on;
        allow     127.0.0.1;
        deny      all;
    }

    location /images {
        root      /usr/share;
        autoindex on;
    }

    location ~ \.php$ {
        fastcgi_pass  127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/nginx-default$fastcgi_script_name;
        include       /etc/nginx/fastcgi_params;
    }
}


server {
    listen      80;
    server_name sdsssdf.localhost.com;
    access_log  /var/log/nginx/localhost.access.log;

    location / {
        root  /var/www/nginx-default/console;
        index index.php index.html index.htm;
    }

    location /doc {
        root      /usr/share;
        autoindex on;
        allow     127.0.0.1;
        deny      all;
    }

    location /images {
        root      /usr/share;
        autoindex on;
    }

    location ~ \.php$ {
        fastcgi_pass  127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/nginx-default$fastcgi_script_name;
        include       /etc/nginx/fastcgi_params;
    }
}

---------------------- Monitoring

location ~ ^/NginxStatus/ {
    stub_status on;    # Nginx status monitoring configuration
}



In this way, Nginx's runtime information can be viewed at http://localhost/NginxStatus/ (the trailing / must not be dropped):


Active connections: 1
server accepts handled requests
 1 1 5
Reading: 0 Writing: 1 Waiting: 0



The fields displayed by NginxStatus mean the following:

Active connections -- the number of connections Nginx is currently handling.
server accepts handled requests -- three counters: total connections accepted, total handshakes handled (when the two are equal, no connection was dropped along the way), and total requests processed. For example, 14553819 accepted connections, 14553819 handled handshakes, and 19239266 requests means roughly 1.3 requests per connection.
Reading -- the number of connections on which Nginx is reading request headers from clients.
Writing -- the number of connections on which Nginx is sending responses back to clients.
Waiting -- with keep-alive enabled, this equals active - (reading + writing): idle keep-alive connections waiting for the next request.
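These numbers can also be pulled from the command line, which is handy for scripts; a sketch, assuming the NginxStatus location above is reachable from localhost:

# Print just the active connection count:
curl -s http://localhost/NginxStatus/ | grep "Active connections"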


------------------------------- Static file processing

Using regular expressions, we can have Nginx recognize various kinds of static files:


location ~ \.(htm|html|gif|jpg|jpeg|png|bmp|ico|css|js|txt)$ {
    root       /var/www/nginx-default/html;
    access_log off;
    expires    24h;
}

For static HTML files, js scripts, and css stylesheets, we want Nginx to process them directly and return them to the browser, which greatly speeds up browsing. For such files we use the root directive to specify where they are stored, and, because they rarely change, the expires directive to control how the browser caches them and so cut down on unnecessary requests. The expires directive controls the "Expires" and "Cache-Control" headers in the HTTP response (i.e. the page cache). Expires can be written in the following formats:

expires epoch;    # Thu, 01 Jan 1970 00:00:01 GMT
expires 60s;
expires 30m;
expires 24h;
expires 1d;
expires max;
expires off;

This way, when you request http://192.168.200.100/1.html, nginx serves the file /var/www/nginx-default/html/1.html.

For example, all requests under the images path can be written as follows:

location ~ ^/images/ {
    root /opt/webapp/images;
}


------------------------ Dynamic page request processing [cluster]

Nginx cannot itself process dynamic pages such as JSP, ASP, PHP, and Perl, but it can pass such requests to backend servers through its reverse proxy, so that Tomcat, Apache, IIS, and the like handle the dynamic page requests. In the configuration example above, after defining the static file requests that Nginx handles directly, all other requests are forwarded to the backend server with the proxy_pass directive (Tomcat in this example). The simplest use of proxy_pass looks like this:
location / {
    proxy_pass       http://localhost:8080;
    proxy_set_header X-Real-IP $remote_addr;
}





No cluster is used here; requests such as JSP and Servlet calls are sent straight to the Tomcat service running on port 8080.

When page traffic is very high, multiple application servers are often needed to handle the dynamic pages, and a cluster architecture is required. Nginx defines a server cluster with the upstream directive. In the example below, a cluster named tomcats is defined, containing six Tomcat services spread over three servers, and proxy_pass is written as follows:


# Configuration of all backend servers in the cluster
upstream tomcats {
    server 192.168.0.11:8080 weight=10;
    server 192.168.0.11:8081 weight=10;
    server 192.168.0.12:8080 weight=10;
    server 192.168.0.12:8081 weight=10;
    server 192.168.0.13:8080 weight=10;
    server 192.168.0.13:8081 weight=10;
}

location / {
    proxy_pass http://tomcats;    # reverse proxy
    include    proxy.conf;
}
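The include proxy.conf line pulls in shared proxy settings that are not shown in this article. A hypothetical sketch of such a file, modeled on the proxy options used in the detailed configuration further below, might look like this:

# proxy.conf -- shared reverse-proxy settings (hypothetical example)
proxy_redirect          off;
proxy_set_header        Host $host;
proxy_set_header        X-Real-IP $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size    10m;
client_body_buffer_size 128k;
proxy_connect_timeout   90;
proxy_send_timeout      90;
proxy_read_timeout      90;
proxy_buffer_size       4k;
proxy_buffers           4 32k;
proxy_busy_buffers_size 64k;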

---------------------- Stress test

wget http://bbs.it-home.org//soft/linux/webbench/webbench-1.5.tar.gz
tar zxvf webbench-1.5.tar.gz
cd webbench-1.5
make && make install

# webbench -c 100 -t 10 http://192.168.200.100/info.php

Parameter description: -c is the number of concurrent clients, -t is the test duration in seconds.

root@ubuntu-desktop:/etc/nginx/sites-available# webbench -c 100 -t 10 http://192.168.200.100/info.php
Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.

Benchmarking: GET http://192.168.200.100/info.php
100 clients, running 10 sec.

Speed=19032 pages/min, 18074373 bytes/sec.
Requests: 3172 susceed, 0 failed.


------------------------------- Detailed nginx configuration instructions (from PPC)



# Running user
user nobody;
# Number of worker processes
worker_processes 2;

# Global error log and PID file
error_log logs/error.log notice;
pid       logs/nginx.pid;

# Working mode and maximum number of connections
events {
    use epoll;
    worker_connections 1024;
}

# Set up the http server and use its reverse proxy function to provide load-balancing support
http {
    # Set the MIME types
    include      conf/mime.types;
    default_type application/octet-stream;

    # Set the log formats
    log_format main     '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $bytes_sent '
                        '"$http_referer" "$http_user_agent" "$gzip_ratio"';
    log_format download '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $bytes_sent '
                        '"$http_referer" "$http_user_agent" "$http_range" "$sent_http_content_range"';

    # Set the request buffers
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;

    # Enable the gzip module
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 8k;
    gzip_types text/plain;

    output_buffers 1 32k;
    postpone_output 1460;

    # Set the access log
    access_log logs/access.log main;

    client_header_timeout 3m;
    client_body_timeout 3m;
    send_timeout 3m;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    # Server list for load balancing
    upstream mysvr {
        # The weight parameter sets the weight: the higher the weight, the higher the probability of being chosen
        # Squid listens on port 3128 on the local machine
        server 192.168.8.1:3128 weight=5;
        server 192.168.8.2:80   weight=1;
        server 192.168.8.3:80   weight=6;
    }

    # Set up the virtual host
    server {
        listen      80;
        server_name 192.168.8.1 www.okpython.com;
        charset     gb2312;

        # Access log for the current virtual host
        access_log logs/www.yejr.com.access.log main;

        # Requests for /img/*, /js/*, /css/* resources are served directly from local files without going through Squid
        # (not recommended when there are many files, because the Squid cache works better)
        location ~ ^/(img|js|css)/ {
            root    /data3/Html;
            expires 24h;
        }

        # Enable load balancing for "/"
        location / {
            proxy_pass http://mysvr;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }

        # Set the address for viewing Nginx status
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file conf/htpasswd;   # the conf/htpasswd file is generated with the htpasswd tool shipped with Apache
        }
    }
}
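Before putting a configuration like this into service, it is worth checking the syntax and then reloading gracefully; a sketch, assuming the binary and pid paths match the install prefix used earlier in this article:

# Test the configuration, then reload without dropping connections:
/usr/local/webserver/nginx/sbin/nginx -t
kill -HUP `cat /usr/local/webserver/nginx/logs/nginx.pid`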


