1. Advantages and disadvantages of Apache and Nginx

Apache is widely used as an HTTP server. It performs well and provides a wide range of functionality through modules.
1) Apache supports concurrent responses to clients: after the httpd daemon starts, it spawns multiple child processes/threads, and each one handles client requests independently.
2) Apache can serve both static and dynamic content. PHP parsing, for example, is handled by a PHP module (usually mod_php5, or one built with apxs2) rather than by the slower CGI interface.
3) Disadvantage: Apache is usually called a process-based server, i.e. a multi-process HTTP server, because it must create a child process/thread to respond to each user request. When there are many concurrent requests (common on large portal sites), a large number of threads are needed, consuming a lot of CPU and memory. Concurrency is therefore not Apache's strength.
4) Solution: another kind of web server, the asynchronous (event-driven) server, handles concurrency much better. The best-known ones are Nginx and Lighttpd. An asynchronous server works in event-driven mode and needs only one or a few threads regardless of the number of concurrent user requests, so it consumes very little in the way of system resources; such servers are also called lightweight web servers. For example, for 10,000 concurrent connections nginx may use only a few MB of memory, while Apache may need several hundred MB.

2. Practical usage
1) Using Apache alone as an HTTP server is a very common setup and needs no introduction. As mentioned above, Apache supports PHP and other server-side scripts through its own modules, with good performance.
2) nginx or lighttpd can also be used alone as an HTTP server. Like Apache, nginx and lighttpd extend server functionality through modules and are configured through a conf file. For PHP, neither nginx nor lighttpd has a built-in PHP module; both rely on FastCGI. Lighttpd's modules provide CGI, FastCGI, SCGI and other interfaces, and Lighttpd can spawn FastCGI backends itself as well as use externally spawned processes. Nginx has no PHP processing of its own; it must use an external FastCGI process to handle PHP, as shown in the sketch below.
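For reference, a minimal sketch of a PHP-over-FastCGI location block in nginx, assuming an externally started FastCGI process (e.g. php-fpm or spawn-fcgi) is already listening on 127.0.0.1:9000; the document root path here is only illustrative:

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;                  # hand the request to the FastCGI backend (assumed address)
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;   # which script the backend should run
    include fastcgi_params;                       # the standard FastCGI parameter set shipped with nginx
}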
----------------------------- Rewrite all non-http://bbs.it-home.org/ access => http://bbs.it-home.org/
server_name web90.***.com;
if ($host = "web90.***.com") {
    rewrite ^(.*)$ http://bbs.it-home.org/$1 permanent;
}
--------------------------------- Nginx stop/smooth restart
Nginx signal control
TERM, INT: fast shutdown
QUIT: graceful shutdown
HUP: graceful restart, reloading the configuration file
USR1: reopen the log files (very useful during log rotation)
USR2: upgrade the executable on the fly (see the upgrade sketch after the restart commands below)
WINCH: gracefully shut down the worker processes
1) Graceful stop:
kill -QUIT <nginx master process ID>
kill -QUIT `cat /usr/local/webserver/nginx/logs/nginx.pid`
2) Fast stop:
kill -TERM <nginx master process ID>
kill -TERM `cat /usr/local/webserver/nginx/logs/nginx.pid`
kill -INT <nginx master process ID>
kill -INT `cat /usr/local/webserver/nginx/logs/nginx.pid`
3) Force-kill all nginx processes:
pkill -9 nginx
4) Graceful restart (reload the configuration):
kill -HUP <nginx master process ID>
kill -HUP `cat /usr/local/webserver/nginx/logs/nginx.pid`
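The USR2 and WINCH signals listed above are used for upgrading the nginx binary without dropping connections. A minimal sketch of the usual sequence, assuming the new binary has already replaced the old one on disk and the pid file path from above is used:

# start a new master process with the new binary; nginx renames the old pid file to nginx.pid.oldbin
kill -USR2 `cat /usr/local/webserver/nginx/logs/nginx.pid`
# gracefully shut down the old worker processes
kill -WINCH `cat /usr/local/webserver/nginx/logs/nginx.pid.oldbin`
# once the new processes are confirmed healthy, gracefully stop the old master
kill -QUIT `cat /usr/local/webserver/nginx/logs/nginx.pid.oldbin`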
----------------------------- Nginx.conf
worker_processes 8;
Number of worker processes to spawn.
Usually set to the total number of CPU cores, or twice that number; for example, two quad-core CPUs give 8 cores in total.
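To check the core count on Linux when choosing this value, either of the following commands works (a small sketch; nproc is part of GNU coreutils):

# print the number of available processing units
nproc
# or count the processor entries directly
grep -c ^processor /proc/cpuinfo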
events
{
    use epoll;                 # network I/O model; epoll is recommended on Linux, kqueue on FreeBSD
    worker_connections 65535;  # maximum number of allowed connections
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
{
    access_log off;   # disable logging
    expires 30d;      # use the expires directive to send caching headers; cache locally for 30 days
}
location ~ .*\.(js|css)$
{
    access_log off;   # disable logging
    expires 1h;
}
================================ Daily scheduled nginx log rotation script
vim /usr/local/webserver/nginx/sbin/cut_nginx_log.sh

#!/bin/bash
# This script runs at 00:00
# The Nginx logs path
logs_path="/usr/local/webserver/nginx/logs/"
mkdir -p ${logs_path}$(date -d "yesterday" +"%Y")/$(date -d "yesterday" +"%m")/
mv ${logs_path}access.log ${logs_path}$(date -d "yesterday" +"%Y")/$(date -d "yesterday" +"%m")/access_$(date -d "yesterday" +"%Y%m%d").log
kill -USR1 `cat /usr/local/webserver/nginx/logs/nginx.pid`

chown -R www:www cut_nginx_log.sh
chmod +x cut_nginx_log.sh
crontab -e
00 00 * * * /bin/bash /usr/local/webserver/nginx/sbin/cut_nginx_log.sh
#/sbin/service crond restart
-------------- Configure gzip compression for nginx
Compressing html, css, js, php, jhtml and other text files generally reduces them to about 25% of their original size: a 100 KB html file, for example, is only about 25 KB after compression. This undoubtedly saves a lot of bandwidth and reduces the server load. Configuring gzip in nginx is relatively simple; usually you only need to add the following lines to the http block of nginx.conf.
gzip on;
gzip_min_length 1000;
gzip_buffers 4 8k;
gzip_types text/plain application/x-javascript text/css text/html application/xml;
Restart nginx, then use a gzip check tool to verify that gzip is enabled for the page, e.g. http://gzip.zzbaike.com/
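Alternatively, a quick command-line check (a small sketch; replace the URL with your own page). The response headers should contain "Content-Encoding: gzip" when compression is active:

# request the page with gzip accepted, discard the body, and print only the response headers
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip, deflate" http://bbs.it-home.org/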
--------------- How to redirect an nginx error page
error_page 404 /404.html;
The 404.html file must be in the html directory under the nginx main directory. If you want to jump directly to another address after a 404 error, set it as follows:
error_page 404 http://bbs.it-home.org/;
Common errors such as 403 and 500 can be defined in the same way (see the sketch below).
Note that the 404.html page must be larger than 512 bytes; otherwise it will be replaced by IE's default error page.
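For example, a minimal sketch for 403 and 50x errors; the file names are illustrative and the files must exist under the configured root:

error_page 403 /403.html;
error_page 500 502 503 504 /50x.html;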
------------------------------ Virtual host configuration
server {
    listen 80;
    server_name localhost;
    access_log /var/log/nginx/localhost.access.log;
    location / {
        root /var/www/nginx-default;
        index index.php index.html index.htm;
    }
    location /doc {
        root /usr/share;
        autoindex on;
        allow 127.0.0.1;
        deny all;
    }
    location /images {
        root /usr/share;
        autoindex on;
    }
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/nginx-default$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }
}
server {
    listen 80;
    server_name sdsssdf.localhost.com;
    access_log /var/log/nginx/localhost.access.log;
    location / {
        root /var/www/nginx-default/console;
        index index.php index.html index.htm;
    }
    location /doc {
        root /usr/share;
        autoindex on;
        allow 127.0.0.1;
        deny all;
    }
    location /images {
        root /usr/share;
        autoindex on;
    }
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/nginx-default$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }
}
---------------------- Monitoring
location ~ ^/NginxStatus/ {
    stub_status on;   # Nginx status monitoring configuration
}
In this way, Nginx's runtime information can be monitored at http://localhost/NginxStatus/ (the trailing / cannot be dropped). The output looks like this:
Active connections: 1
server accepts handled requests
 1 5
Reading: 0 Writing: 1 Waiting: 0
The fields displayed by NginxStatus mean the following:
Active connections: the number of active connections Nginx is currently handling.
server accepts handled requests: totals since startup; for example, 14553819 connections accepted in total, 14553819 handshakes handled successfully (proving none failed in between), and 19239266 requests processed in total (an average of about 1.3 requests per handshake).
Reading: the number of connections on which nginx is reading request headers from clients.
Writing: the number of connections on which nginx is writing responses back to clients.
Waiting: with keep-alive enabled, this equals active - (reading + writing), i.e. connections nginx has finished processing that are idle and waiting for the next request.
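Because this page exposes server internals, it is common to restrict who can reach it. A minimal sketch using allow/deny (the allowed address is illustrative):

location ~ ^/NginxStatus/ {
    stub_status on;
    access_log off;
    allow 127.0.0.1;   # only localhost may view the status page
    deny all;          # everyone else is rejected
}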
------------------------------- Static file processing
Through regular expressions, we can let Nginx identify various static files
location ~ \.(htm|html|gif|jpg|jpeg|png|bmp|ico|css|js|txt)$ {
    root /var/www/nginx-default/html;
    access_log off;
    expires 24h;
}
For static HTML files, js scripts and css stylesheets, we want Nginx to serve them directly to the browser, which greatly speeds up web browsing. For such files we use the root directive to specify where they are stored; and because such files are rarely modified, we use the expires directive to control how long browsers cache them, reducing unnecessary requests. The expires directive controls the "Expires" and "Cache-Control" headers of the HTTP response (and thus page caching). Expires can be written in the following formats:
expires 1 January, 1970, 00:00:01 GMT;
expires 60s;
expires 30m;
expires 24h;
expires 1d;
expires max;
expires off;
In this way, when you request http://192.168.200.100/1.html, nginx maps the request to /var/www/nginx-default/html/1.html and serves that file directly.
For example, all requests in the images path can be written as follows:
location ~ ^/images/ {
    root /opt/webapp/images;
}
------------------------ Dynamic page request processing [cluster]
Nginx cannot itself run popular dynamic pages such as JSP, ASP, PHP and PERL, but it can forward requests through its reverse proxy to backend servers, for example Tomcat, Apache or IIS, which then handle the dynamic page requests. In the configuration example above, after defining the static file requests that Nginx handles directly, all other requests are forwarded to the backend server with the proxy_pass directive (Tomcat in this example). The simplest use of proxy_pass looks like this:

location / {
    proxy_pass http://localhost:8080;
    proxy_set_header X-Real-IP $remote_addr;
}

No cluster is used here; requests such as JSP and Servlet requests are simply sent straight to the Tomcat service running on port 8080.
When page traffic is very heavy, several application servers are often needed to handle the dynamic pages; in that case a cluster architecture is required. Nginx defines a server cluster with the upstream directive. In the complete example earlier, we defined a cluster named tomcats containing six Tomcat services on three servers. The proxy_pass directive is then written as follows:
# Configuration of all backend servers in the cluster
upstream tomcats {
    server 192.168.0.11:8080 weight=10;
    server 192.168.0.11:8081 weight=10;
    server 192.168.0.12:8080 weight=10;
    server 192.168.0.12:8081 weight=10;
    server 192.168.0.13:8080 weight=10;
    server 192.168.0.13:8081 weight=10;
}

location / {
    proxy_pass http://tomcats;   # reverse proxy to the cluster
    include proxy.conf;
}
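The include proxy.conf line above pulls shared proxy settings from a separate file that this article does not show. A plausible sketch of its contents, assembled from the proxy options that appear in the full configuration later in the article (an assumption, not the author's actual file):

# proxy.conf - shared reverse proxy settings (illustrative)
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;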
---------------------- Stress test
wget http://bbs.it-home.org//soft/linux/webbench/webbench-1.5.tar.gz
tar zxvf webbench-1.5.tar.gz
cd webbench-1.5
make && make install
# webbench -c 100 -t 10 http://192.168.200.100/info.php
Parameter description: -c is the number of concurrent clients, -t is the duration in seconds
root@ubuntu-desktop:/etc/nginx/sites-available# webbench -c 100 -t 10 http://192.168.200.100/info.php
Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.
Benchmarking: GET http://192.168.200.100/info.php
100 clients, running 10 sec.
Speed=19032 pages/min, 18074373 bytes/sec.
Requests: 3172 susceed, 0 failed.
------------------------------- Detailed nginx configuration instructions
# Run as this user
user nobody;
# Number of worker processes to start
worker_processes 2;
# Global error log and PID file
error_log logs/error.log notice;
pid logs/nginx.pid;
# Working mode and maximum number of connections
events {
    use epoll;
    worker_connections 1024;
}
# Set up the http server, using its reverse proxy feature to provide load balancing
http {
    # Set the mime types
    include conf/mime.types;
    default_type application/octet-stream;
    # Set the log formats
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '"$gzip_ratio"';
    log_format download '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        '"$http_range" "$sent_http_content_range"';
    # Request buffers
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;

    # Enable the gzip module
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 8k;
    gzip_types text/plain;
    output_buffers 1 32k;
    postpone_output 1460;

    # Access log
    access_log logs/access.log main;
    client_header_timeout 3m;
    client_body_timeout 3m;
    send_timeout 3m;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    # Server list for load balancing
    upstream mysvr {
        # weight is the selection weight: the higher the weight, the higher the probability of being chosen
        # Squid is listening on port 3128 on the local machine
        server 192.168.8.1:3128 weight=5;
        server 192.168.8.2:80 weight=1;
        server 192.168.8.3:80 weight=6;
    }
    # Virtual host configuration
    server {
        listen 80;
        server_name 192.168.8.1 www.okpython.com;
        charset gb2312;
        # Access log for this virtual host
        access_log logs/www.yejr.com.access.log main;
        # Requests for /img/*, /js/* and /css/* resources are served directly from local files, without going through squid
        # This approach is not recommended when there are many files, because the squid cache works better
        location ~ ^/(img|js|css)/ {
            root /data3/Html;
            expires 24h;
        }
        # Enable load balancing for "/"
        location / {
            proxy_pass http://mysvr;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }
        # Address for viewing the Nginx status page
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file conf/htpasswd;   # the conf/htpasswd file is generated with the htpasswd tool shipped with apache
        }
    }
}