Nginx common application technical guide


1. Nginx log cutting

# crontab -e
59 23 * * * /usr/local/sbin/logcron.sh >/dev/null 2>&1

[root@count ~]# cat /usr/local/sbin/logcron.sh

    #!/bin/bash
    log_dir="/data/logs"
    time=`date +%Y%m%d`
    /bin/mv ${log_dir}/access_linuxtone.org.log ${log_dir}/access_count.linuxtone.org.${time}.log
    kill -USR1 `cat /var/run/nginx.pid`

More on log analysis and processing will follow (discussion is also welcome): http://bbs.linuxtone.org/forum-8-1.html
2. Use AWStats to analyze nginx logs
Set the nginx log format and use AWStats for analysis.
See http://bbs.linuxtone.org/thread-56-1-1.html
3. How to keep nginx from logging some requests
The logs are too large, at several GB per day, so it helps to log less. The following configuration can be placed in the server {} section:

    location ~ .*\.(js|jpg|jpeg|css|bmp|gif)$ {
        access_log off;
    }
11. Nginx cache service configuration
To cache files locally, add the following sub-parameters:

    proxy_store on;
    proxy_store_access user:rw group:rw all:rw;
    proxy_temp_path <cache directory>;

Where:
proxy_store on enables the local cache function, and
proxy_temp_path specifies the directory in which the cache is stored, for example proxy_temp_path html;
After the configuration above, files are cached on the local disk, but every request still fetches the file from the remote end. To avoid fetching files that already exist locally, the proxy_pass must be made conditional:

    # proxy_pass:
    if (!-e $request_filename) {
        proxy_pass http://mysvr;
    }

If the requested file does not exist in the directory specified by proxy_temp_path, the request is forwarded to the backend.
For more advanced applications, study ncache; see the ncache-related posts on http://bbs.linuxtone.org.
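Putting this section together, a minimal caching server block might look like the sketch below. The upstream name mysvr comes from the example above; the listen port, backend address, and paths are illustrative assumptions.

```nginx
upstream mysvr {
    server 192.0.2.10:8080;            # hypothetical backend
}

server {
    listen 80;
    root /data/www/cache;              # proxy_store saves files under the document root
    location / {
        proxy_store on;                                # keep a local copy of fetched files
        proxy_store_access user:rw group:rw all:rw;
        proxy_temp_path /data/nginx_cache/proxy_temp;  # temp dir while downloading
        if (!-e $request_filename) {                   # only proxy when no local copy exists
            proxy_pass http://mysvr;
        }
    }
}
```

Requests for files already stored under the root are then served directly from disk.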
12. nginx Load Balancing
1. Basic nginx Load Balancing knowledge
Currently, nginx upstream supports the following allocation methods:
1) Round robin (default)
Requests are distributed to the backend servers one by one in order. If a backend server goes down, it is removed automatically.
2) weight
Specifies the round-robin weight. weight is proportional to the access ratio; use it when backend server performance is uneven.
3) ip_hash
Each request is allocated according to the hash of the client IP address, so a given visitor always reaches the same backend server; this can solve the session problem.
4) fair (third party)
Requests are allocated based on the response time of the backend servers; servers with shorter response times are preferred.
5) url_hash (third party)
Requests are allocated by the hash of the requested URL, so a given URL is always directed to the same backend server; this is useful when the backends cache.
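As a sketch, the allocation methods above correspond to upstream blocks like the following. All addresses are placeholders, and the url_hash variant assumes the third-party upstream hash module is compiled in.

```nginx
# weight: 192.0.2.11 gets roughly twice the traffic of 192.0.2.12
upstream weighted_pool {
    server 192.0.2.11:8080 weight=2;
    server 192.0.2.12:8080 weight=1;
}

# ip_hash: a given client IP always reaches the same backend (session stickiness)
upstream sticky_pool {
    ip_hash;
    server 192.0.2.11:8080;
    server 192.0.2.12:8080;
}

# url_hash (third-party module): a given URL always hits the same backend cache
upstream url_pool {
    hash $request_uri;
    server 192.0.2.21:3128;
    server 192.0.2.22:3128;
}
```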
2. nginx load balancing example 1

    upstream bbs.linuxtone.org {    # define the load-balanced devices and their status
        server 127.0.0.1:9090 down;
        server 127.0.0.1:8080 weight=2;
        server 127.0.0.1:6060;
        server 127.0.0.1:7070 backup;
    }

In the server {} section that needs load balancing, add:
proxy_pass http://bbs.linuxtone.org/;
The status of each device can be set:
a) down: the current server temporarily does not participate in the load.
b) weight: defaults to 1; the larger the weight, the larger the share of the load.
c) max_fails: the permitted number of failed requests, 1 by default. When it is exceeded, the error defined by proxy_next_upstream is returned.
d) fail_timeout: the pause time after max_fails failures.
e) backup: receives requests only when all other non-backup machines are down or busy, so this machine carries the lightest load.
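As a hypothetical combined example of these parameters (addresses and values are illustrative):

```nginx
upstream example_pool {
    # after 3 failed requests, pause this server for 30 seconds
    server 192.0.2.30:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 192.0.2.31:8080;
    server 192.0.2.32:8080 backup;   # only used when the others are down or busy
}
```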
Nginx supports setting up multiple groups of load balancing at the same time, for different servers to use.
client_body_in_file_only: when set to on, client POST data is recorded to files, which is useful for debugging.
client_body_temp_path: sets the directory for those record files; up to three directory levels can be specified.
location matches URLs and can redirect or hand off to a new proxy load-balancing group.
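A minimal sketch of the POST-debugging directives just described (the path is an illustrative assumption):

```nginx
# record client POST bodies to files for debugging;
# up to three directory levels may follow the path
client_body_in_file_only on;
client_body_temp_path /data/nginx_debug/client_body 1 2 3;
```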
3. nginx load balancing example 2
Requests are allocated by the hash of the accessed URL, so each URL is directed to the same backend server. This is effective when the backend servers cache, and can increase the Squid cache hit rate.
Simple load balancing example:
# vi nginx.conf   # core part of the main nginx configuration file

    ..........
    # loadbalance my.linuxtone.org
    upstream my.linuxtone.org {
        ip_hash;
        server 127.0.0.1:8080;
        server 192.168.169.136:8080;
        server 219.101.75.138:8080;
        server 192.168.169.117;
        server 192.168.169.118;
        server 192.168.169.119;
    }
    ..............
    include vhosts/linuxtone_lb.conf;
    .........

# vi proxy.conf

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 50m;
    client_body_buffer_size 256k;
    proxy_connect_timeout 30;
    proxy_send_timeout 30;
    proxy_read_timeout 60;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
    proxy_max_temp_file_size 128m;
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    # nginx cache
    # client_body_temp_path /data/nginx_cache/client_body 1 2;
    proxy_temp_path /data/nginx_cache/proxy_temp 1 2;

# vi linuxtone_lb.conf

    server {
        listen 80;
        server_name my.linuxtone.org;
        index index.php;
        root /data/www/wwwroot/mylinuxtone;
        if (-f $request_filename) {
            break;
        }
        if (-f $request_filename/index.php) {
            rewrite (.*) $1/index.php break;
        }
        error_page 403 http://my.linuxtone.org/member.php?m=user&a=login;
        location / {
            if (!-e $request_filename) {
                proxy_pass http://my.linuxtone.org;
                break;
            }
            include /usr/local/nginx/conf/proxy.conf;
        }
    }


13. Simple nginx Optimization

1. Reduce the size of the compiled nginx binary
By default nginx is compiled in debug mode (-g), which inserts a lot of tracing and assertion code; the resulting binary is several megabytes. Compiling with debug mode removed yields a binary of only a few hundred KB.
In auto/cc/gcc, the last few lines are:

    # debug
    CFLAGS="$CFLAGS -g"

Comment out or delete these lines, then recompile.
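If you prefer not to edit by hand, the substitution can be scripted. The sketch below demonstrates the sed expression on the literal line; against a real source tree you would run the same sed on auto/cc/gcc.

```shell
# Demonstrate commenting out the debug flag line.
# In a real tree: sed -i 's|^CFLAGS="$CFLAGS -g"|# &|' auto/cc/gcc
echo 'CFLAGS="$CFLAGS -g"' | sed 's|^CFLAGS="$CFLAGS -g"|# &|'
# prints: # CFLAGS="$CFLAGS -g"
```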
2. Modify the nginx headers to disguise the server
1) Modify nginx.h

    # vi nginx-0.7.30/src/core/nginx.h
    #define NGINX_VERSION "1.8"
    #define NGINX_VER "LTWS/" NGINX_VERSION
    #define NGINX_VAR "NGINX"
    #define NGX_OLDPID_EXT ".oldbin"

2) Modify ngx_http_header_filter_module.c
# vi nginx-0.7.30/src/http/ngx_http_header_filter_module.c
Change:

    static char ngx_http_server_string[] = "Server: nginx" CRLF;

to:

    static char ngx_http_server_string[] = "Server: LTWS" CRLF;

3) Modify ngx_http_special_response.c
# vi nginx-0.7.30/src/http/ngx_http_special_response.c
Change:

    static u_char ngx_http_error_full_tail[] =
    "<hr><center>" NGINX_VER "</center>" CRLF
    "</body>" CRLF
    "</html>" CRLF
    ;

    static u_char ngx_http_error_tail[] =
    "<hr><center>nginx</center>" CRLF
    "</body>" CRLF
    "</html>" CRLF
    ;

to:

    static u_char ngx_http_error_full_tail[] =
    "<center>" NGINX_VER "</center>" CRLF
    "<hr><center>http://www.linuxtone.org</center>" CRLF
    "</body>" CRLF
    "</html>" CRLF
    ;

    static u_char ngx_http_error_tail[] =
    "<hr><center>LTWS</center>" CRLF
    "</body>" CRLF
    "</html>" CRLF
    ;

After the modifications, recompile and reinstall. The disguised name then appears on error pages such as 404 (if no custom error page is specified), and the Server header can be checked with the curl command.

3. Compile optimization for a specific CPU
By default, the GCC optimization parameter used by nginx is -O.
You can use the following two parameters for better optimization:
--with-cc-opt='-O3'
--with-cpu-opt=opteron
This optimizes the build for a specific CPU type and GCC version.
This method brings only a small performance improvement; it is listed for reference only.
To check your CPU type: # cat /proc/cpuinfo | grep "model name"
Compile Optimization Parameter reference: http://en.gentoo-wiki.com/wiki/Safe_Cflags

4. Use TCMalloc to optimize nginx performance

    # wget http://download.savannah.gnu.org/releases/libunwind/libunwind-0.99-alpha.tar.gz
    # tar zxvf libunwind-0.99-alpha.tar.gz
    # cd libunwind-0.99-alpha/
    # CFLAGS=-fPIC ./configure
    # make CFLAGS=-fPIC
    # make CFLAGS=-fPIC install
    # wget http://google-perftools.googlecode.com/files/google-perftools-0.98.tar.gz
    # tar zxvf google-perftools-0.98.tar.gz
    # cd google-perftools-0.98/
    # ./configure
    # make && make install
    # echo "/usr/local/lib" > /etc/ld.so.conf.d/usr_local_lib.conf
    # ldconfig
    # lsof -n | grep tcmalloc

Compile nginx with the google_perftools module:
./configure --with-google_perftools_module
Then add to the main nginx.conf:
google_perftools_profiles /path/to/profile;
5. Kernel Parameter Optimization
# vi /etc/sysctl.conf   # add the following at the end:

    net.ipv4.tcp_fin_timeout = 30
    net.ipv4.tcp_keepalive_time = 300
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.ip_local_port_range = 5000 65000

# make the configuration take effect immediately
/sbin/sysctl -p
14. How to build a high-performance LEMP environment
See http://www.linuxtone.org/lemp/lemp.pdf
1. Complete configuration script download: http://www.linuxtone.org/lemp/scripts.tar.gz
2. Provides common nginx configuration examples (virtual hosts, anti-leeching, rewrite, access control, load balancing, making Discuz-related pages static, and so on); for production use you only need to modify them slightly.
3. Replaces the original xcache with eAccelerator (EA), and provides related simple tuning scripts and configuration files.
For more information and updates, see: http://www.linuxtone.org
15. nginx monitoring
1. rrdtool + Perl script monitoring
Install rrdtool first; this article does not cover rrdtool itself. For installation details, see the linuxtone monitoring section.
# cd /usr/local/sbin
# wget http://blog.kovyrin.net/files/mrtg/rrd_nginx.pl.txt
# mv rrd_nginx.pl.txt rrd_nginx.pl
# chmod a+x rrd_nginx.pl
# vi rrd_nginx.pl   # configure the paths in the script file

    #!/usr/bin/perl
    use RRDs;
    use LWP::UserAgent;
    # define location of rrdtool databases
    my $rrd = '/data/www/wwwroot/nginx/rrd';
    # define location of images
    my $img = '/data/www/wwwroot/nginx/html';
    # define your nginx stats URL
    my $URL = "http://219.232.244.13/nginx_status";
    ............

[Note] Modify the paths above according to your situation.
# crontab -e   # add the following
* * * * * /usr/local/sbin/rrd_nginx.pl
After restarting crond, the Perl script runs periodically and generates the graph images in /data/www/wwwroot/nginx/html. Point an nginx virtual host at that directory, then open e.g. http://xxx/connections-day.png to see the server status graphs.
2. Official nginx-rrd monitoring (multiple virtual hosts) (recommended)
Web: http://www.nginx.eu/nginx-rrd.html
This solution is an improvement and enhancement of the monitoring solution above; it also requires rrdtool and the corresponding Perl modules:
# yum install perl-HTML*
First create the database and image storage directories:

    # mkdir -p /data/www/wwwroot/nginx/{rrd,html}
    # cd /usr/local/sbin
    # wget http://www.nginx.eu/nginx-rrd/nginx-rrd-0.1.4.tgz
    # tar zxvf nginx-rrd-0.1.4.tgz
    # cd nginx-rrd-0.1.4
    # cd etc/
    # cp nginx-rrd.conf /etc
    # cd cron.d
    # cp nginx-rrd.cron /etc/cron.d
    # cd /usr/local/src/nginx-rrd-0.1.4/html
    # cp index.php /data/www/wwwroot/nginx/html/
    # cd /usr/local/src/nginx-rrd-0.1.4/usr/sbin
    # cp * /usr/sbin/

# vi /etc/nginx-rrd.conf

    #####################################################
    #
    # dir where rrd databases are stored
    RRD_DIR="/data/www/wwwroot/nginx/rrd";
    # dir where png images are presented
    WWW_DIR="/data/www/wwwroot/nginx/html";
    # process nice level
    NICE_LEVEL="-19";
    # bin dir
    BIN_DIR="/usr/sbin";
    # servers to test
    # server_url;server_name
    SERVERS_URL="http://219.32.205.13/nginx_status;219.32.205.13 http://www.linuxtone.org/nginx_status;www.linuxtone.org"

// Adjust according to your actual situation.
SERVERS_URL format: http://domain1/nginx_status;domain1 http://domain2/nginx_status;domain2
This format lets you monitor the connection status of multiple virtual hosts.
Start the crond service, then browse to http://219.32.205.13/nginx/html to view the graphs. The configuration process is simple!
3. cacti template monitoring of nginx
Use the nginx_status output to draw graphs for cacti monitoring.
http_stub_status_module must be enabled when compiling nginx.
# vi /usr/local/nginx/conf/nginx.conf

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 192.168.1.37;
        deny all;
    }
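After reloading, /nginx_status returns plain-text counters in the standard stub_status format, for example:

```text
Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
```

The three numbers are accepted connections, handled connections, and total requests; Reading/Writing/Waiting break down the active connections (the figures above are illustrative).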

    # kill -HUP `cat /usr/local/nginx/logs/nginx.pid`
    # wget http://forums.cacti.net/download.php?id=12676
    # tar xvfz cacti-nginx.tar.gz
    # cp cacti-nginx/get_nginx_socket_status.pl /data/cacti/scripts/
    # cp cacti-nginx/get_nginx_clients_status.pl /data/cacti/scripts/
    # chmod 755 /data/cacti/scripts/get_nginx*

Test the plugin:

    # /data/cacti/scripts/get_nginx_clients_status.pl http://192.168.1.37/nginx_status

In the cacti console, import:
cacti_graph_template_nginx_clients_stat.xml
cacti_graph_template_nginx_sockets_stat.xml
16. Troubleshooting common problems and errors
1. 400 Bad Request: causes and solutions
Configure nginx.conf as follows:
client_header_buffer_size 16k;
large_client_header_buffers 4 64k;
Adjust the values according to your actual situation.
2. nginx 502 Bad Gateway error
Try setting:
proxy_next_upstream error timeout invalid_header http_500 http_503;
or:
large_client_header_buffers 4 32k;
3. nginx 413 Request Entity Too Large error
This error occurs during file uploads.
Edit the main configuration file nginx.conf, locate the http {} section, and add:
client_max_body_size 10m;   // set the size according to your needs
If PHP is used, client_max_body_size must match, or be slightly larger than, the following php.ini values, so that no error occurs due to inconsistent size limits:
post_max_size = 10M
upload_max_filesize = 2m
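As a sketch of where the directive goes (the size repeats the text above; everything else is illustrative):

```nginx
http {
    # allow uploads up to 10 MB; keep this >= post_max_size in php.ini
    client_max_body_size 10m;

    server {
        listen 80;
        # client_max_body_size may also be set per server {} or per location {}
    }
}
```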
4. Resolving 504 Gateway Time-out (nginx)
This problem was encountered when upgrading a Discuz forum.
In general, this happens because nginx's default FastCGI response buffer is small, which causes the FastCGI process to block; if your FastCGI service does not handle this blocking well, the end result can be a 504 Gateway Time-out.
Today's websites, especially forums, have many replies and a lot of content; a single page can be hundreds of KB.
The default FastCGI response buffer is 8 KB; we can set a larger value.
In nginx.conf, add: fastcgi_buffers 8 128k;
This sets the FastCGI buffer to 8 buffers of 128 KB each.
Of course, for long-running operations you may also need to increase the nginx timeout parameters, for example send_timeout 60; for 60 seconds.
After adjusting these two parameters, no more timeouts were shown, so the effect seems good, though other factors may also have contributed. There is still not much documentation on nginx; many things take long-term experience to settle. We look forward to your findings!
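The two adjustments discussed in this section, as they might appear together in nginx.conf (values taken from the text above):

```nginx
# inside the http {} or server {} context
fastcgi_buffers 8 128k;   # 8 buffers of 128 KB each for FastCGI responses
send_timeout 60;          # allow slow operations up to 60 seconds
```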
5. How to use nginx as a proxy
A friend runs Tomcat on one server on port 8080 (192.168.1.2:8080); another machine has IP address 192.168.1.8. He wants to reach the Tomcat service by visiting http://192.168.1.8. Configure nginx.conf on 192.168.1.8 as follows:

    server {
        listen 80;
        server_name java.linuxtone.org;
        location / {
            proxy_pass http://192.168.1.2:8080;
            include /usr/local/nginx/conf/proxy.conf;
        }
    }

6. How to disable nginx logs
access_log /dev/null;
error_log /dev/null;
