All About Nginx

Source: Internet
Author: User
Tags: epoll, imap, sendfile

This article contains my notes and observations from studying Nginx; if there are errors or inaccuracies, please point them out.

1 Basic Concepts

1.1 Forward Proxy and Reverse Proxy

Forward proxy:

What we usually mean by "proxying" is forward proxying.
A forward proxy is generally used to reach servers we cannot access directly. The forward proxy server sits between the user and the target server: if user A wants to access target server B but cannot reach it directly for whatever reason, A can use forward proxy server C. User A sends the request to proxy server C and specifies target server B; the proxy server forwards the request to B and returns the result to user A.
Using a forward proxy usually requires configuration on the client side. A typical example is providing Internet access to LAN users behind a firewall: some company LANs have no direct Internet access, but can reach the Internet through a configured proxy. Circumventing network blocks ("climbing over the wall") is another example.
Reverse proxy
The reverse proxy is the opposite of the forward proxy.
With a reverse proxy, the client needs no configuration at all and always believes that the proxy server it reaches is the target server. The user sends the request to the reverse proxy server, which decides which target server the request should be forwarded to, and then returns the result to the user.
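As a concrete illustration, a minimal reverse-proxy server block might look like the following sketch (the domain and the back-end address are hypothetical):

```nginx
server {
    listen 80;
    server_name example.com;               # hypothetical public domain

    location / {
        # The client only ever talks to this server; Nginx forwards
        # the request to the hidden target server and relays the reply.
        proxy_pass http://10.0.0.5:8080;   # hypothetical target server
    }
}
```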

Comparison
A side-by-side comparison makes the difference clearer.
A. Who is inside the wall?
With a forward proxy, the user is inside the wall: the wall limits the user's ability to communicate with the outside, and forward proxying gives the user access to the external environment.
With a reverse proxy, the target server is inside a wall whose only door opens onto the reverse proxy server. No user can reach the target server directly, but through the reverse proxy server users can still reach it.

B. Whom the user is visiting
With a forward proxy, the user is logically accessing the target server. For example, when a LAN user accesses IP A through a proxy, the request is conceptually for IP A, but it is actually sent to the forward proxy server, which then forwards it; the user must tell the proxy server both the request content and the request's target.

With a reverse proxy, the user is accessing the reverse proxy server itself. Take Nginx as an example: when we use Nginx for load balancing, several target servers sit behind one Nginx instance. Say the Nginx address is IP A and the target server addresses are IP B1, IP B2, and IP B3. The user actually requests IP A, and the reverse proxy server decides which target server to forward the request to.

C. Who needs to be configured
With a forward proxy, the user (client) needs the corresponding configuration.
With a reverse proxy, the user needs no configuration; instead, the reverse proxy server must be configured with its associated target servers.

1.2 Nginx

What is it?
Nginx is a high-performance web server and reverse proxy server that also supports IMAP/POP3/SMTP proxying.
What is it for?
Nginx is mainly used for reverse proxying and load balancing.

2 Nginx

2.1 Installation

As a general rule, you can install it directly with the following command:

sudo apt-get install nginx

After installation, start Nginx to test it. Either of the following two commands will start it:

sudo /etc/init.d/nginx start

Or

sudo service nginx start

After starting, enter http://localhost in the browser address bar; if the Nginx welcome page appears, Nginx has been installed and started successfully.

2.2 Commands

Command                          Function
sudo service nginx start         Start Nginx
sudo /etc/init.d/nginx start     Start Nginx
sudo service nginx stop          Stop Nginx
sudo /etc/init.d/nginx stop      Stop Nginx
sudo service nginx restart       Restart Nginx
sudo /etc/init.d/nginx restart   Restart Nginx
3 Nginx Processes and Operating Mechanism

3.1 Master & Worker

In most cases Nginx runs in multi-process mode (a single-process mode appears to be supported as well).
After Nginx starts, one master process and several worker processes run in the background; master and worker have a parent/child process relationship.
The master process is like a supervisor, while the worker processes are like workers. The master process is mainly responsible for managing the worker processes.
The master process's work includes:
1. Receiving external signals and sending signals to the individual worker processes.
2. Monitoring the health of the worker processes and automatically starting a new worker when one exits abnormally.
The worker processes' work:
Basic network events are handled by the worker processes. Multiple workers compete fairly for client requests: each worker process has an equal chance of acquiring a request. A request is processed by exactly one worker process, and a worker cannot handle a request being processed by another process.

3.2 Specific processes

First, look at the Linux fork function:
fork creates a child process that essentially copies all of the parent process's information, which means the child inherits the parent's properties.
For Nginx, the master process first creates the required listening socket and then forks the worker processes (the number of workers is usually set to match the number of CPU cores). Each worker inherits the master's properties, including the socket; each process has its own descriptor, but they all listen on the same IP and port. When a connection request arrives, every worker process could be notified, since each has a fair chance at the request. To ensure that a request is handled by only one process, Nginx uses the mutex accept_mutex: all worker processes compete for the mutex first, and only the process that grabs it handles the request. A worker's handling of a request then goes through accepting the connection, reading the request, parsing it, processing it, and closing the connection.
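The fork-then-inherit-the-socket pattern described above can be sketched in a few lines of Python. This is a toy illustration, not Nginx's actual code (and it omits accept_mutex): the parent creates the listening socket, forks workers that inherit it, and the kernel hands each connection to exactly one worker.

```python
import os
import socket

# The master creates the listening socket BEFORE forking, so every
# worker inherits the same descriptor -- just as Nginx's master does.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))        # port 0: pick any free port
listener.listen(8)
port = listener.getsockname()[1]

workers = []
for i in range(2):                     # real Nginx: one worker per CPU core
    pid = os.fork()
    if pid == 0:                       # child: a worker process
        conn, _ = listener.accept()    # blocks on the inherited socket
        conn.sendall(b"handled by worker %d" % i)
        conn.close()
        os._exit(0)
    workers.append(pid)

# The "clients": two connections, each served by exactly one worker.
replies = []
for _ in range(2):
    c = socket.create_connection(("127.0.0.1", port))
    replies.append(c.recv(64))
    c.close()

for pid in workers:
    os.waitpid(pid, 0)
print(sorted(replies))
```

Each worker here exits after one accept; in Nginx each worker instead loops, and accept_mutex decides which of the competing workers gets to accept.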

3.3 Synchronous & Asynchronous blocking & non-blocking

From 3.2 we know that a worker process handles one request at a time, and the number of worker processes is limited, so how can Nginx handle high concurrency?
First, let's look at synchronous vs. asynchronous and blocking vs. non-blocking.
I once came across a very vivid example online; let me share it:

Story background: Lao Wang next door boils water
Characters: Lao Wang, an ordinary kettle, a whistling kettle (which whistles once the water boils)
Day one:
Lao Wang puts the ordinary kettle on the fire and stares at it, waiting for the water to boil. (synchronous blocking)
Lao Wang thinks this wastes time.
Day two:
Lao Wang puts the ordinary kettle on the fire, goes to the living room to watch TV, and checks now and then whether the water has boiled. (synchronous non-blocking)
Lao Wang thinks this is too much trouble.
Day three:
Lao Wang puts the whistling kettle on the fire and stares at it, waiting for the whistle. (asynchronous blocking)
Lao Wang thinks he is being a bit silly.
Day four:
Lao Wang puts the whistling kettle on the fire, goes to the living room to watch TV, and fetches the kettle when it whistles. (asynchronous non-blocking)

In this story:
Synchronous vs. asynchronous describes the kettle: the ordinary kettle is synchronous, the whistling kettle is asynchronous. With a synchronous call, the caller must actively wait for the result; with an asynchronous call, the caller does not get the result immediately, but is notified by the callee once the result is ready.
Blocking vs. non-blocking describes Lao Wang: the staring Lao Wang is blocked, the TV-watching Lao Wang is non-blocking. Blocking vs. non-blocking concerns the state of the program while it waits for the call result (message, return value). A blocking call suspends the current thread until the result is returned; the thread does not resume until it has the result.
A non-blocking call does not block the current thread even when the result is not immediately available. So the blocked Lao Wang can only wait for the water to boil, while the non-blocking Lao Wang can watch TV in the meantime.

Nginx uses asynchronous non-blocking IO. For example, Nginx handles all IO requests in a single thread (IO multiplexing): when IO operation A begins, Nginx does not wait for it to complete but moves on to process IO operation B; only when notified that A has completed does it proceed with A's next step. This is how Nginx handles highly concurrent requests.
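The single-threaded event loop can be sketched with Python's standard selectors module (which uses epoll on Linux). This toy echo server serves two clients from one thread, acting on a socket only when the kernel reports it ready:

```python
import selectors
import socket

# One thread watches many sockets and acts only when the kernel says
# one is ready -- the IO-multiplexing pattern Nginx builds on epoll.
sel = selectors.DefaultSelector()

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))
listener.listen(8)
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)
port = listener.getsockname()[1]

# Two clients, both talking to the single-threaded server at once.
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(2)]
for c in clients:
    c.sendall(b"ping")

served = 0
while served < 2:
    for key, _ in sel.select(timeout=5):
        if key.fileobj is listener:
            conn, _ = listener.accept()      # new connection is ready
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(16)      # request data is ready: no waiting
            key.fileobj.sendall(b"pong:" + data)
            sel.unregister(key.fileobj)
            key.fileobj.close()
            served += 1

replies = [c.recv(16) for c in clients]
print(replies)
```

While one connection is idle, the loop is free to serve the other, which is exactly why a handful of workers can sustain many thousands of concurrent connections.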

4 Nginx Configuration

Use the nginx -t command to find the location of the Nginx configuration file.
First, look at the default configuration:

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen   localhost:110;
#        protocol pop3;
#        proxy    on;
#    }
#
#    server {
#        listen   localhost:143;
#        protocol imap;
#        proxy    on;
#    }
#}

The first thing to understand is that the Nginx configuration is modular; its structure is diagrammed as follows:

4.1 Advanced Configuration

The top of the file holds the advanced (global) configuration:

user www-data;worker_processes auto;pid /run/nginx.pid;

For user and pid, we should keep the default settings.
worker_processes sets the number of worker processes; setting it to auto matches it to the number of available CPU cores.
In addition, the following configuration is used to set the limit on the number of open files for worker processes:

worker_rlimit_nofile 100000;

If it is not set, the value defaults to the operating system's limit. Setting it higher helps avoid the "too many open files" problem.

4.2 Events Module

The events module contains the settings for connection processing in Nginx.
The default configuration is:

events {
    worker_connections 768;
    # multi_accept on;
}

worker_connections: the maximum number of connections a worker process may open at the same time.
multi_accept: tells Nginx to accept as many connections as possible after being notified of a new connection.
The use directive is also available:

use epoll;

Sets the event-polling method used to multiplex client connections. On Linux 2.6+ you should use epoll; on *BSD, use kqueue.

4.3 http Module

The http module controls the core features of Nginx's HTTP processing.
Compare with the configuration file above.

sendfile

The steps for a server to respond to an HTTP request are:
1. Read the file from disk into the kernel buffer
2. Copy from the kernel buffer into user-space memory
3. Process it (static resources need no processing)
4. Copy into the kernel's socket buffer (send buffer)
5. The network card sends the data
Linux's sendfile() can skip steps 2 and 3 and copy the data directly between the disk and the socket. The sendfile directive is what enables the sendfile() function.
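Python exposes the same system call as os.sendfile(), so the zero-copy path can be demonstrated directly. A minimal sketch using a temporary file and a loopback socket pair standing in for Nginx and a browser:

```python
import os
import socket
import tempfile

# Write a small "static resource" to disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello from disk\n")
    path = f.name

# Loopback server/client pair standing in for Nginx and a client.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()

# sendfile() copies file data to the socket inside the kernel,
# skipping the user-space round trip (steps 2 and 3 above).
with open(path, "rb") as f:
    sent = os.sendfile(conn.fileno(), f.fileno(), 0, os.path.getsize(path))

conn.close()
data = client.recv(64)
os.unlink(path)
print(sent, data)
```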

tcp_nopush

This directive tells Nginx to send the HTTP response headers in a single packet, rather than one at a time.

tcp_nodelay

Tells Nginx not to buffer small pieces of data but to send them immediately, rather than after a delay.

keepalive_timeout

Sets the timeout for keep-alive connections with the client; the connection is closed once the configured time has elapsed.

types_hash_max_size

types_hash_bucket_size sets the amount of memory each hash bucket occupies, while types_hash_max_size affects the collision rate of the hash table. The larger types_hash_max_size is, the more memory is consumed, but the collision rate of hash keys drops and lookups get faster; the smaller it is, the less memory is consumed, but the collision rate may rise.

server_tokens

Hides the Nginx version number in error pages and response headers.

server_names_hash_bucket_size

This directive resolves problems when virtual hosts use many domain names. When multiple virtual hosts are configured, it must be set to a suitably large value, in multiples of 32.

server_name_in_redirect

Controls Nginx's redirect rules.
When a URL points to a directory and does not end with "/", Nginx automatically issues a 301 redirect. There are two cases:
1. server_name_in_redirect on (the default): the URL redirects to the first domain in server_name + the directory name + "/";
2. server_name_in_redirect off: the URL redirects to the domain in the original URL + the directory name + "/".
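For example, with the following sketch of a server block (the domains are hypothetical), a request for http://www.example.com/docs would redirect to http://example.com/docs/ when the directive is on, and to http://www.example.com/docs/ when it is off:

```nginx
server {
    listen 80;
    server_name example.com www.example.com;   # hypothetical domains
    server_name_in_redirect on;                # redirects use the first server_name
    root /var/www/html;
}
```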

client_header_timeout

Sets the timeout for reading the request header.

client_body_timeout

Sets the timeout for reading the request body.

The above are the basic settings of the Nginx http module.
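Collected together, the basic settings above might look like this inside the http block (the timeout values here are illustrative, not recommendations):

```nginx
http {
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    server_tokens       off;

    server_names_hash_bucket_size 64;
    client_header_timeout 12;
    client_body_timeout   12;
}
```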

access_log

Sets whether Nginx stores access logs. Turning this option off makes disk read IO faster.

error_log

Set it to log only critical errors.

reset_timedout_connection

When set to on, connections from unresponsive clients are reset (closed).

send_timeout

Sets the timeout for transmitting the response to the client; if the client stops reading for this long, the connection is closed.

The above covers Nginx's log-related configuration.
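A log-oriented excerpt combining these four directives could look like this (paths and values are illustrative):

```nginx
access_log off;                              # skip access logging for faster disk IO
error_log  /var/log/nginx/error.log crit;    # log only critical errors
reset_timedout_connection on;                # reset connections from unresponsive clients
send_timeout 10;                             # give up if the client stops reading
```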

gzip

Tells Nginx to send data in gzip-compressed form, reducing the amount of data we send.

gzip_disable

Disables gzip for the specified clients. We set it to "msie6" (IE6 and lower) to keep our setup broadly compatible.

gzip_static

Tells Nginx to check for a pre-gzipped version of a resource before compressing it on the fly. This requires pre-compressing your files (commented out in this example), which lets you use the maximum compression ratio while sparing Nginx from compressing the files on each request (see the gzip_static documentation for details).

gzip_proxied

Allows or disallows compression of responses to proxied requests, based on the request and response. We set it to any, meaning all proxied requests are compressed.

gzip_min_length

Sets the minimum number of response bytes required to enable compression. If a response is smaller than 1000 bytes, it is better left uncompressed, because compressing such small payloads slows down every stage of handling the request.

gzip_comp_level

Sets the compression level, any number from 1 to 9; 9 is the slowest but gives the highest compression ratio. We set it to 4, a reasonable compromise.

gzip_types

Sets the data formats to compress. Several examples appear above, and you can add more formats.
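Putting the gzip directives together, a sketch of the compression setup described above (values mirror the text; the gzip_types list is only a sample):

```nginx
gzip              on;
gzip_disable      "msie6";      # no gzip for IE6 and older
gzip_static       on;           # serve pre-compressed .gz files if present
gzip_proxied      any;          # compress all proxied requests
gzip_min_length   1000;         # skip responses smaller than 1000 bytes
gzip_comp_level   4;            # 1 (fastest) .. 9 (smallest)
gzip_types        text/plain text/css application/json application/javascript text/xml;
```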

4.4 Load-Balancing Configuration

Add the following to the http module:

upstream proxy_set {
    server 10.0.2.16:80 weight=1;
    server 10.0.2.17:80 weight=1;
}

server {
    listen 80;
    server_name localhost 127.0.0.1;

    location / {
        proxy_pass http://proxy_set;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # client_max_body_size 10m;
    }
}

First, add a new upstream node under the http node, named proxy_set here (the name is up to you), and configure each back-end server's ip:port and weight inside it; the weight may be omitted.
Then configure the server node: listen is the listening port, and server_name is the server address; multiple addresses are separated by spaces (I set two here, localhost and 127.0.0.1). Next, configure the server's location node: proxy_pass is http:// plus the upstream name, and the rest of the configuration stays consistent with the above (client_max_body_size sets the upload size limit; the Nginx default is 1 MB).
Restart the Nginx service after the configuration is complete.
Enter localhost:80/project-name or 127.0.0.1:80/project-name in the browser address bar to access it.

