Implementing data forwarding and load balancing with Nginx

Nginx is a third-party open-source piece of software mainly used for data forwarding, reverse proxying, and load balancing, and it is widely used across the Internet and software industries. This post implements Nginx's data forwarding and load-balancing functions; the most important part is the Nginx configuration file. The server used in this article is an Alibaba Cloud CentOS 6.8 machine, and the Nginx version is 1.6.2. For demonstration purposes, you can also install a Tomcat on the server. For installing Nginx on the server, this article follows http://www.runoob.com/linux/nginx-install-setup.html. Note that Nginx is usually installed under the /home directory, but the Nginx configuration files generally live in /usr/local/webserver/nginx/conf, and most of the configuration below goes into the nginx.conf file there.
The following nginx.conf implements simple request forwarding: when the server receives a request on port 80, it forwards it to Baidu. The main configuration is in the server{} block; if you enter ip:80 in a browser, the Baidu page will be served:
user  www www;
worker_processes 2;                        # set to match the number of CPU cores
error_log  /usr/local/webserver/nginx/logs/nginx_error.log  crit;   # log location and log level
pid        /usr/local/webserver/nginx/nginx.pid;
# Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 65535;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';

    #charset gb2312;

    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 8m;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;

    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    #limit_zone crawler $binary_remote_addr 10m;

    # server (virtual host) configuration
    server {
        listen       80;                    # listening port
        server_name  localhost;             # domain name

        location / {
            index index.jsp;
            proxy_pass http://www.baidu.com;
        }

        error_page 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
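Most of the directives above are general tuning (worker processes, buffers, FastCGI, gzip); the forwarding itself only needs a listening port, a location, and a proxy_pass target. A stripped-down sketch of just that part, placed inside the same http block, would look like this:

server {
    listen       80;                        # port that receives the incoming requests
    server_name  localhost;

    location / {
        proxy_pass http://www.baidu.com;    # every request to / is forwarded to this address
    }
}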

Another important Nginx feature is load balancing: Nginx distributes the requests it receives across different servers, and the IP addresses of those servers must be configured in nginx.conf. The list of all servers that requests should be forwarded to is written in an upstream block.
The global settings, events block, and http tuning directives are the same as in the previous file; the additions, inside the http block, are the upstream block that lists the backend servers and the proxy headers in the server block:

upstream backend {
    # list of all backend IPs that requests will be forwarded to
    server 118.178.126.250:8080;
    server other_ip:other_port;             # replace with the other backends' IPs and ports
}

# server (virtual host) configuration
server {
    listen       80;
    server_name  localhost;

    location / {
        # set the Host header and pass the real client address so the
        # backend server can obtain the client's real IP
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # disable proxy buffering
        proxy_buffering off;

        # reverse proxy target: the upstream group defined above
        proxy_pass http://backend;
    }
}
By default, Nginx's upstream module balances load by round-robin: each request is handed to a different backend server in turn, and a backend that goes down is automatically taken out of rotation. The other common method is ip_hash, where each request is assigned according to a hash of the client's IP address, so a given client always reaches the same backend server; this solves the session problem of the round-robin approach (a sketch is shown below). With this configuration in place, repeated requests to the Nginx address are distributed to the different servers; if a backend has monitoring in place you can watch the requests arrive, and it is worth running a stress test and judging the load-balancing behaviour from the results. Nginx can be deployed on one of the backend servers or on a separate physical machine from the application servers; it only needs network connectivity to them, whether over the Internet or over a LAN, and when it sits on a LAN in front of the backends it is acting as a reverse proxy.
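For reference, here is a minimal sketch of an ip_hash upstream, reusing the backend addresses from above; the max_fails and fail_timeout parameters are standard upstream server options that control when a backend is treated as down, and the values here are only illustrative:

upstream backend {
    ip_hash;                                # hash the client IP so each client always hits the same backend
    server 118.178.126.250:8080 max_fails=3 fail_timeout=30s;
    server other_ip:other_port;             # placeholder for the other backend, as above
}

With the default round-robin method (no ip_hash line), a weight=N parameter on a server line can additionally skew how often that backend is chosen.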

