Nginx configuration file


Reposted from http://www.jb51.net/article/61137.htm. This article is a getting-started tutorial for Nginx configuration: it explains the reverse proxy concept, initial configuration, advanced configuration, load balancing configuration, and related topics. Readers who need it may find it useful as a reference.

Basic concepts

The most common use of Nginx is to provide a reverse proxy service. So what is a reverse proxy? Ordinary (forward) proxies are something many mainland compatriots have presumably used as a magical tool, and the principle is roughly as follows:

A forward proxy server acts as an intermediary on the client side: it accepts the client's requests, hides the real client, and fetches resources from the server on its behalf. If the proxy server sits outside the Great Firewall, it can also help us get across it. A reverse proxy, as the name implies, works the other way around: the proxy server acts as an intermediary on the server side, hiding the real servers that provide the service. The principle is roughly as follows:

The goal here is certainly not to cross the Great Firewall, but to provide features such as security and load balancing. Security means that a client's request does not land directly on an intranet server; it passes through the proxy first, and at that layer we can apply security filtering, flow control, anti-DDoS measures, and other policies. Load balancing means we can horizontally scale the number of back-end servers that actually provide the service; the proxy forwards requests to each server according to some rule, so that the load on every server stays roughly balanced.

Nginx is currently a popular reverse proxy server of exactly this kind.

On Ubuntu you can skip building from source and install it directly with apt-get:

sudo apt-get install nginx
After installation, you can start the Nginx service directly with:
sudo service nginx start
Nginx listens on port 80 by default, so we can check the result by visiting http://localhost in a browser.
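If you prefer the command line, a quick way to confirm that the default page is being served (assuming curl is installed) is:

# ask only for the response headers of the default page
curl -I http://localhost
# a healthy install typically answers with "HTTP/1.1 200 OK" and "Server: nginx"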

Initial configuration

The default configuration file for Nginx is located at:

/etc/nginx/nginx.conf
The best way to learn the configuration is to start from examples, so let's skip everything else and look directly at the configuration behind Nginx's default page. There is a line in the configuration file:
include /etc/nginx/sites-enabled/*;
This line pulls in external configuration files. The sites-enabled folder contains only one file, named default, and it is this external configuration file that is responsible for Nginx's default page. After stripping the comments from it, you are left with the following lines:

server {
    server_name localhost;
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}

A large site usually has many subordinate sites, each with its own servers providing the corresponding service. In Nginx we can use the concept of virtual hosts to isolate the configuration of these different services, and that is what server means in the configuration above. For example, Google has two products, Translate and Scholar. We could configure two server blocks in the Nginx configuration file, with server_name set to translate.google.com and scholar.google.com respectively, so that requests for different URLs are matched against the corresponding Nginx settings and forwarded to different back-end servers. The server_name here is matched against the Host header of the client's HTTP request.
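As an illustration only (the back-end addresses below are made up for the example), two such virtual hosts might look roughly like this:

server {
    listen 80;
    server_name translate.google.com;       # matched against the Host header
    location / {
        proxy_pass http://10.0.0.11:8080;   # hypothetical translate back end
    }
}

server {
    listen 80;
    server_name scholar.google.com;
    location / {
        proxy_pass http://10.0.0.12:8080;   # hypothetical scholar back end
    }
}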

In this case server_name is localhost, which is why we can reach this configuration's page in the browser via localhost. The two listen directives below it set the listening ports for IPv4 and IPv6 respectively; if they were set to 8080, we could only reach the default page via localhost:8080.
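As a minimal sketch, that hypothetical 8080 variant of the two directives would be:

listen 8080 default_server;
listen [::]:8080 default_server ipv6only=on;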

default_server means that if no other server block in the Nginx configuration matches an incoming HTTP request, this one handles it. For example, if we visit 127.0.0.1, the request falls through to this server block.

Each URL request corresponds to a service: when Nginx handles and forwards it, the request maps either to a local file path or to the path of a service on another server, and this path is matched by location. You can think of server as being configured per domain name, and of location as configuring the finer-grained paths under that domain.

The location / here matches every request that starts with /, so both localhost/xxx and localhost/yyy go through the configuration below it. Besides this simple prefix match, Nginx also supports regular expressions, exact matches, and other finer matching methods. try_files means Nginx looks for the listed entries in order and returns the first match: if you request localhost/test, it first looks for a file /test, then for a directory /test/, and if neither exists it returns a 404. We can also use proxy_pass inside a location block to implement reverse proxying and load balancing, but this simplest configuration does not involve that yet.
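For reference, a rough sketch of those finer matching styles (the URIs and responses here are arbitrary examples, not part of the default configuration):

# exact match: only the URI /status matches
location = /status {
    return 200 "ok\n";
}

# case-sensitive regular expression: any URI ending in .php
location ~ \.php$ {
    return 403;
}

# prefix match that takes priority over regular expression matches
location ^~ /static/ {
    root /usr/share/nginx/html;
}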

root points at a local folder that serves as the root path for all URL requests. For example, when a user requests localhost/test, Nginx looks for a file named test inside the /usr/share/nginx/html folder and returns it.

index sets the default pages: when we visit localhost itself, Nginx automatically looks under the root path for index.html and then index.htm, in that order, and returns the first one it finds.

Advanced location configuration
The configuration above only maps the user's URL to local files; it does not yet implement the legendary reverse proxy or load balancing (although, to be fair, Nginx is also perfectly good at serving static files). Let's configure location further and see how to achieve them.

It is easy to configure. For example, if I want to forward all requests to port 8080 of the machine that really provides the service, all it takes is this:

location / {
    proxy_pass http://123.34.56.67:8080;
}
All requests are then reverse-proxied to 123.34.56.67. That gives us the reverse proxy, but everything still goes to a single server, so where is the load balancing? This is where Nginx's upstream module comes in:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
    server backend4.example.com;
}

location / {
    proxy_pass http://backend;
}
In the upstream block we specify a group of machines and name the group backend; then in proxy_pass we simply forward requests to http://backend, and the reverse proxy is load-balanced across the four machines. The ip_hash directive means the balancing is done according to the client's IP address, so requests from the same client always reach the same back-end server.
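ip_hash is not the only strategy. As a rough sketch (the server names are placeholders), weighted round-robin and least-connections balancing look like this:

upstream backend_weighted {
    # default round-robin, with backend1 receiving roughly three times
    # as many requests as backend2
    server backend1.example.com weight=3;
    server backend2.example.com;
}

upstream backend_least {
    # send each request to the server with the fewest active connections
    least_conn;
    server backend3.example.com;
    server backend4.example.com;
}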

For the configuration to take effect we do not need to restart Nginx; it is enough to reload the configuration:

sudo service nginx reload
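It is usually worth checking the syntax before reloading; Nginx has a built-in test mode for this:

# test the configuration files for syntax errors without applying them
sudo nginx -t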

Summary

These are the simplest configurations for static file serving, reverse proxying, and load balancing with Nginx. All functionality in Nginx is implemented through modules: configuring upstream uses the upstream module, server and location belong to the HTTP core module, and there are also the limit modules for flow control, the mail module, the HTTPS/SSL module, and so on. Their configuration is similar to the above; detailed configuration instructions can be found in the Nginx module documentation.
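As one more sketch of what those other modules look like in practice, a minimal HTTPS server block using the SSL module might be roughly the following (the certificate paths are placeholders, not real files):

server {
    listen 443 ssl;
    server_name example.com;

    # placeholder paths; point these at your real certificate and key
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    root /usr/share/nginx/html;
    index index.html;
}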
