Nginx as both web server and reverse proxy in a production environment


I. First, some thoughts on my first blog post

As we all know, Nginx is a high-performance HTTP and reverse proxy server. In my previous work, however, a given Nginx instance or cluster always implemented either HTTP service, reverse proxying, or load balancing; never HTTP service and reverse proxying at the same time.

So, can Nginx provide both HTTP service and reverse proxying at once?

The answer is yes.

A similar situation came up in a recent project, so I am sharing it here. I previously had no habit of blogging and only kept private notes of my operational steps. Whenever I hit a problem I would search Baidu or Google, and other people's blogs helped me solve many of them.

That shows the value of sharing. Fortunately, I feel I have a reasonable grasp of Linux, and I am currently organizing the documents from my past work; I will gradually share them in blog posts.

Enough digression. Here is how to make a single Nginx instance provide both HTTP service and reverse proxying.

II. Environment overview:

Because this is a production environment with some confidentiality concerns, the IP addresses below have been partially masked:


External traffic passes through a firewall NAT to a load-balancer address, which forwards to the Nginx web server hosting the main business. Some services must be proxied by Nginx on to internal web servers. Because the intranet also has its own firewall and load balancer, and the internal load balancer is itself exposed through a firewall NAT address, the internal IP addresses can be ignored; the steps below operate only on the Nginx server. The Nginx proxy address is xxxxx (hidden because this is a production environment).

III. Actual configuration:

This article only describes configuring Nginx to provide both web and proxy services; installing Nginx is out of scope.

1. Configure the web server.

The main configuration file, nginx.conf, lives in the conf directory under the Nginx installation path. It is actually quite simple; the key parts are as follows:

nginx.conf:
#user root;
worker_processes 4;  # worker-process model; the default is 1, set to 4 here. Size it by the server's CPU count, at most twice the number of CPUs.

error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;  # an event model (e.g. epoll) can also be selected under events, but in this Nginx version epoll is the default
}


http {
    include mime.types;
    default_type application/octet-stream;

    load_iguard /usr/local/iguard/syncserver/mod_nginx/libigx.so /usr/local/iguard/syncserver/mod_nginx/mod_iguard3.conf;
    enable_iguard on;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    include extra/upstream.conf;  # the key line: an upstream.conf created under conf/extra holds the web-site and reverse-proxy settings. Everything could live in the main file, but splitting it out keeps the configuration layered.
    #gzip on;
}
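As an aside, if your Nginx version is 1.2.5/1.3.8 or newer, the worker-sizing rule above can be delegated to Nginx itself (assuming your build supports it):

```nginx
# lets Nginx detect the number of CPU cores and start one worker per core
worker_processes auto;
```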

2. Create extra/upstream.conf under the conf directory; it does not exist by default and must be created manually.

First cd into conf/ (its location depends on the Nginx installation path; if Nginx is installed under /usr/local, the conf directory is /usr/local/nginx/conf). Taking an installation under /usr/local as an example:

cd /usr/local/nginx/conf

mkdir extra

cd extra

vim upstream.conf (or vi upstream.conf). The following configuration is only an excerpt.

upstream trs {  # names the proxy backend; the name is only meaningful locally, for convenient reference by proxy_pass. "trs" is just a label; any name works.
    server xx.xx.xx.xx:8080;  # the real server address, actually the firewall mapping of the internal load-balancer address and port.
    # Nginx load balancing would also be configured here by listing more server lines;
    # Nginx's default load-balancing algorithm is WRR (weighted round-robin).
    # Since this intranet already has a firewall and load balancer, it is enough to
    # write only the address and port that the firewall maps.
    # For load balancing, see http://nginx.org/en/docs/http/ngx_http_upstream_module.html
}  # only one reverse-proxied service is shown here as an example; the real environment has 8 services, all configured the same way.

server {
    listen 80;  # Nginx listens on port 80 to provide web service.
    server_name localhost;
    ssi on;
    ssi_silent_errors off;
    ssi_types text/shtml;
    #charset koi8-r;
    root /ucap/websites;

    #access_log logs/host.access.log main;

    location / {
        autoindex on;
        root /xxx/websites;
        index index.html index.htm index.shtml;
    }

    location /zgcd {
        autoindex on;
        alias /xxx/websites/zgcd;
        index index.html index.htm index.shtml;
    }

    location /qlsgzxxw {
        autoindex on;
        alias /xxx/websites/qlsgzxxw;
        index index.html index.htm index.shtml;
    }

    location /cdstb {
        autoindex on;
        alias /xxx/websites/cdstb;
        index index.html index.htm index.shtml;
    }

    # the locations above (highlighted in red in the original post) provide the web service.

    location /trsapp {  # this location provides the proxy service
        root /ucap/websites;
        proxy_pass http://trs/trsapp;
        # "trs" refers to the upstream name above: a request for /trsapp on this
        # web site is proxied to http://10.1.1.1:8080/trsapp

        proxy_redirect default;
        proxy_set_header Remote-Host $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host:8001;  # the key line: this issue took a whole day to track down; without it the proxy fails (shown later).
        proxy_set_header Cookie $http_cookie;
    }
}
#server {
#    listen 81;
#    server_name localhost;
#
#    location / {
#        proxy_pass http://23.202.1.211;
#    }
#}
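For reference, if Nginx itself had to balance the load (rather than the intranet firewall and load balancer doing it), the upstream block would simply list several backends. A minimal sketch with hypothetical addresses, using the default weighted round-robin (WRR) algorithm:

```nginx
upstream trs {
    # weight defaults to 1; a server with weight=2 receives twice as many requests
    server 10.1.1.1:8080 weight=2;
    server 10.1.1.2:8080;
    # take a backend out of rotation for 30s after 3 failed attempts
    server 10.1.1.3:8080 max_fails=3 fail_timeout=30s;
}
```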

3. Proxy configuration parameters:

Main proxy options:

proxy_set_header

Sets or rewrites request headers passed to the backend, for example so the backend can obtain the client's host name or real IP address instead of the proxy's.

client_body_buffer_size

Specifies the buffer size for the client request body; the body can be buffered locally before being passed on to the backend.

proxy_connect_timeout

Timeout for establishing a connection with the backend server, i.e. how long Nginx waits for the handshake to complete.

proxy_send_timeout

Timeout for transmitting the request to the backend server; it applies between two successive writes, and if the backend accepts nothing within this time, Nginx closes the connection.

proxy_read_timeout

Timeout for reading the response from the backend after a successful connection; it applies between two successive reads and effectively covers the time the request spends in the backend's queue waiting to be processed.

proxy_buffer_size

Size of the buffer used for the first part of the backend response (the response headers).

proxy_buffers

Number and size of the buffers that hold the response Nginx reads from the proxied backend.

proxy_busy_buffers_size

Limits the buffers that may be busy sending the response to the client while it is still being read from the backend; by default this is twice proxy_buffer_size.

proxy_temp_file_write_size

Limits the amount of data written to the temporary proxy file at a time.
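Put together, these directives might be tuned inside the proxy location like this (the values are illustrative assumptions, not the production settings from this article):

```nginx
location /trsapp {
    proxy_pass http://trs/trsapp;

    client_body_buffer_size    128k; # buffer the client request body locally
    proxy_connect_timeout      60s;  # max wait to establish the backend connection
    proxy_send_timeout         60s;  # timeout between two successive writes to the backend
    proxy_read_timeout         60s;  # timeout between two successive reads from the backend
    proxy_buffer_size          8k;   # buffer for the first part of the response (headers)
    proxy_buffers              8 8k; # 8 buffers of 8k each for the response body
    proxy_busy_buffers_size    16k;  # limit on buffers busy sending to the client
    proxy_temp_file_write_size 16k;  # data written to a temp file at a time
}
```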

Reference: http://liuyu.blog.51cto.com/183345/166381/

4. The failure caused by omitting proxy_set_header Host $host:8001;

1) Reproducing the failure:

2) Workaround:
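The screenshots of the failure did not survive, but the likely cause can be reconstructed from the configuration itself: by default Nginx sends Host: $proxy_host to the backend, which here would be "trs" (the upstream name), so any absolute URLs or redirects the backend generates point at the wrong host. The workaround is the header line already shown above; note that the :8001 port is specific to this environment's firewall mapping:

```nginx
# default behavior would send "Host: trs" (the upstream name) to the backend;
# instead, forward the host the client requested, with the externally mapped port
proxy_set_header Host $host:8001;
```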

IV. Testing the implemented functions:

The web service works; see a request to /cdstb, defined in the web configuration above.

The proxy function also works:

 

