Configuring Nginx for Load Balancing and HTTPS

Source: Internet
Author: User
Tags: epoll, sendfile, nginx, server

The Nginx configuration system consists of a master configuration file and a number of auxiliary configuration files. These are all plain-text files located in the conf directory under the Nginx installation directory.

Lines that start with #, or whose leading spaces or tabs are followed by #, are treated as comments: they are meaningful only to whoever edits the file, and Nginx ignores them when it reads the configuration.
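For instance, both full-line and trailing comments are ignored in the same way (the directive and value below are only illustrative):

```nginx
# this whole line is a comment
    # leading spaces or tabs before the # are also allowed
worker_processes 4;  # a trailing comment after a directive is ignored as well
```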

Files other than the master configuration file nginx.conf are only used in certain cases, while the master file is used in all cases, so we take the master configuration file as the example when explaining the Nginx configuration system.

nginx.conf contains a number of configuration items. Each configuration item consists of two parts: a configuration directive and its directive parameters. The directive parameters are the values assigned to the configuration directive.
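For example, a single configuration item pairs a directive name with one or more parameter values and ends with a semicolon (the paths and values here are only illustrative):

```nginx
# directive          parameter(s)
worker_processes     4;
error_log            logs/error.log  info;   # a directive may take several parameters
```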

Here is the configuration of my Nginx server for load balancing and HTTPS:

# NOTE: where the original omitted numeric values, typical settings are shown.

worker_processes 4;            # number of worker processes, usually set equal to the number of CPUs

# working mode and maximum number of connections
events {
    worker_connections 1024;   # maximum number of concurrent connections per worker process
}

# HTTP server: its reverse-proxy feature provides load-balancing support
http {
    include mime.types;        # MIME types, defined in the mime.types file
    default_type application/octet-stream;   # default file type

    # sendfile specifies whether Nginx calls the sendfile() function (zero-copy mode) to
    # output files. For normal applications it must be set to on; for heavy disk-I/O
    # applications such as downloads, it can be set to off to balance disk and network I/O
    # processing speed and reduce the load on the system.
    sendfile on;

    keepalive_timeout 65;      # connection timeout, in seconds

    # server list for load balancing
    upstream riskraiders {
        server 172.16.0.52:443;
        server 172.16.0.53:443;
        # The weight parameter (e.g. weight=5) represents the weight: the higher the
        # weight, the greater the probability of being assigned a request.
    }

    server {
        listen 80;             # listen on port 80
        listen 443 ssl;        # listen on port 443
        server_name fk.riskraider.com;   # access via fk.riskraider.com

        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_certificate /etc/ssl/fk.riskraider.com.crt;
        ssl_certificate_key /etc/ssl/fk.riskraider.com.key;
        ssl_prefer_server_ciphers on;

        # automatically redirect HTTP to HTTPS
        if ($server_port = 80) {
            rewrite ^(.*)$ https://$host$1 permanent;
        }

        location / {
            root /ljzx;        # default document root of the server
            proxy_pass https://riskraiders;   # forward requests to the riskraiders server list

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            # the backend web servers can obtain the user's real IP via X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;

            proxy_next_upstream off;
            proxy_connect_timeout 90;   # timeout for Nginx connecting to the backend (proxy connect timeout)
            proxy_read_timeout 90;      # backend response timeout after a successful connection (proxy read timeout)
            proxy_send_timeout 90;      # backend data transfer timeout (proxy send timeout)
        }
    }
}
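The if/rewrite rule above works, but a common alternative (a sketch, not part of the original configuration) is to split the redirect into a dedicated plain-HTTP server block and use return, which avoids evaluating an if on every request:

```nginx
# plain-HTTP server whose only job is to redirect to HTTPS
server {
    listen 80;
    server_name fk.riskraider.com;
    return 301 https://$host$request_uri;
}
```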

The configuration items in nginx.conf are grouped by logical meaning into multiple scopes, also called configuration directive contexts. Different scopes contain one or more configuration items:

main: Parameters unrelated to any specific business function (such as the HTTP service or a mail-service proxy) at runtime, for example the number of worker processes and the identity the server runs under.
http: Configuration parameters related to providing HTTP service, for example whether to use keepalive or gzip compression.
server: The HTTP service can host several virtual hosts. Each virtual host has a corresponding server configuration item containing its virtual-host-related settings. When providing a proxy for mail services, several servers can likewise be defined, each distinguished by its listening address.
location: A series of configuration items that apply to specific URLs within the HTTP service.
mail: Shared configuration items used when implementing an SMTP/IMAP/POP3 mail proxy (multiple proxies may be implemented, working on multiple listening addresses).
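Putting these scopes together, the overall shape of nginx.conf looks like this (a bare skeleton; the directives and values are placeholders):

```nginx
user  www www;              # main scope: process-level settings

events {
    worker_connections 1024;
}

http {                      # http scope: settings shared by all HTTP servers
    gzip on;

    server {                # server scope: one virtual host
        listen 80;
        server_name example.com;

        location / {        # location scope: settings for specific URLs
            root /var/www;
        }
    }
}

# mail { ... }              # mail scope: SMTP/IMAP/POP3 proxy settings
```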

The following is a more complete configuration file for reference:

# define the user and group that Nginx runs as
user www www;

# number of Nginx worker processes; the recommended setting is equal to the total number of CPU cores
worker_processes 8;

# global error log path and level: [debug | info | notice | warn | error | crit]
error_log /usr/local/nginx/logs/error.log info;

# process PID file
pid /usr/local/nginx/logs/nginx.pid;

# maximum number of file descriptors one Nginx process may open. In theory it should be the
# maximum number of open files (ulimit -n) divided by the number of Nginx processes, but
# Nginx does not distribute requests evenly, so it is best to keep the value consistent with
# ulimit -n.
# On Linux 2.6 the open-file limit is 65535, so worker_rlimit_nofile should be set to 65535.
# Because Nginx does not assign requests to processes evenly, if you set 10240 and total
# concurrency reaches 30,000-40,000, a process may exceed 10240 descriptors and a 502 error
# will be returned.


worker_rlimit_nofile 65535;

# working mode and maximum number of connections
events {
    # event model: [kqueue | rtsig | epoll | /dev/poll | select | poll].
    # epoll is the high-performance network I/O model of Linux kernel 2.6 and later; on
    # Linux epoll is recommended, while on FreeBSD the kqueue model should be used.
    # Additional notes: like Apache, Nginx has different event models for different
    # operating systems.
    # A) Standard event models: select and poll. If the current system has no more
    #    efficient method, Nginx chooses select or poll.
    # B) Efficient event models:
    #    kqueue: used with FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X. Using
    #            kqueue on a dual-processor MacOS X system can crash the kernel.
    #    epoll: used on Linux kernel version 2.6 and later systems.
    #    /dev/poll: used for Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+
    #            and Tru64 UNIX 5.1A+.
    #    eventport: used for Solaris 10. To prevent kernel crashes, install the
    #            security patches.
    use epoll;

    # maximum number of connections per process
    # (maximum clients = worker_connections * worker_processes).
    # Tune according to hardware, together with worker_processes: as large as possible,
    # but do not run the CPU to 100%.
    worker_connections 65535;
}

# HTTP server: its reverse-proxy feature provides load-balancing support
http
{
    # map of file extensions to file types
    include mime.types;

    # default file type
    default_type application/octet-stream;

    # default character set
    #charset utf-8;

    # The hash table that stores server names is controlled by the directives
    # server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size
    # is always equal to the size of the hash table and is a multiple of the processor
    # cache size, which speeds up key lookups by reducing the number of memory accesses.
    # If the hash bucket size equals the processor cache line size, the worst-case lookup
    # of a key takes 2 memory accesses: the first determines the storage unit's address,
    # and the second finds the key within that unit. Therefore, if Nginx hints that you
    # should increase hash max size or hash bucket size, increase the former first.
    server_names_hash_bucket_size 128;

    # buffer size for client request headers. This can be set according to your system
    # page size: a request header usually does not exceed 1k, but since the system page
    # is generally larger than 1k, it is set to the page size. The page size can be
    # obtained with the command getconf PAGESIZE:
    #     [root@web001 ~]# getconf PAGESIZE
    #     4096
    # client_header_buffer_size may exceed 4k, but its value must be set to an integer
    # multiple of the system page size.
    client_header_buffer_size 32k;

    # buffers for large client request headers. By default Nginx uses the
    # client_header_buffer_size buffer to read header values; if a header is too large,
    # large_client_header_buffers is used instead.
    large_client_header_buffers 4 64k;

    # maximum size of a file uploaded through Nginx
    client_max_body_size 8m;

    # enable efficient file transfer mode. The sendfile directive specifies whether Nginx
    # calls the sendfile() function (zero-copy mode) to output files: set it to on for
    # normal applications; for heavy disk-I/O applications such as downloads, it can be
    # set to off to balance disk and network I/O processing speed and reduce the load on
    # the system.
    # Note: if images do not display correctly, change this to off.
    sendfile on;

    # enable directory listings; suitable for a download server; off by default
    autoindex on;

    # this option allows or forbids the socket TCP_CORK option (tcp_nopush); it is used
    # only together with sendfile
    tcp_nodelay on;

    # keepalive timeout, in seconds
    keepalive_timeout 120;

    # cache of open file descriptors; disabled by default. max specifies the number of
    # cache entries (the suggested value is the open-file limit); inactive is how long a
    # file may go unrequested before its cache entry is removed.
    open_file_cache max=65535 inactive=60s;

    # how often to check the validity of cached entries.
    # Syntax: open_file_cache_valid time   Default: 60s   Context: http, server, location
    open_file_cache_valid 80s;

    # minimum number of times a file must be used within the inactive period of
    # open_file_cache for its descriptor to stay open in the cache. As above, if a file is
    # not used once within the inactive time, it is removed; with a larger value, the file
    # descriptor is always kept open in the cache.
    # Syntax: open_file_cache_min_uses number   Default: 1   Context: http, server, location
    open_file_cache_min_uses 1;

    # whether to cache errors encountered when looking up a file.
    # Syntax: open_file_cache_errors on | off   Default: off   Context: http, server, location
    open_file_cache_errors on;

    # FastCGI parameters improve site performance: they reduce resource usage and increase
    # access speed. The following parameters can be understood literally.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    # gzip module settings
    gzip on;                  # enable gzip compressed output
    gzip_min_length 1k;       # minimum file size to compress
    gzip_buffers 4 16k;       # compression buffers
    gzip_http_version 1.0;    # compression HTTP version (default 1.1; if the front end is squid2.5, use 1.0)
    gzip_comp_level 2;        # compression level
    # compression types. text/html is already included by default, so there is no need to
    # list it; doing so is not an error, but it produces a warning.
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;



    # needed when limiting the number of connections per IP
    #limit_zone crawler $binary_remote_addr 10m;

    # load-balancing configuration
    upstream piao.jd.com {
        # upstream load balancing: weight is a weight that can be set according to machine
        # configuration. The weight parameter represents the weight; the higher the
        # weight, the greater the probability of being assigned a request.
        server 192.168.80.121:80 weight=3;
        server 192.168.80.122:80 weight=2;
        server 192.168.80.123:80 weight=3;

        # Nginx's upstream currently supports the following allocation methods:
        # 1. Round robin (the default): each request is assigned to a different backend
        #    server in chronological order; if a backend server goes down, it is removed
        #    automatically.
        # 2. weight: specifies the polling probability; weight is proportional to the
        #    access ratio and is used when backend server performance is unequal.
        #    For example:
        #    upstream bakend {
        #        server 192.168.0.14 weight=10;
        #        server 192.168.0.15 weight=10;
        #    }
        # 3. ip_hash: each request is assigned according to the hash of the client IP, so
        #    each visitor always reaches the same backend server, which solves the session
        #    problem. For example:
        #    upstream bakend {
        #        ip_hash;
        #        server 192.168.0.14:88;
        #        server 192.168.0.15:80;
        #    }
        # 4. fair (third party): requests are assigned according to the backend server's
        #    response time; shorter response times get priority.
        #    upstream backend {
        #        server server1;
        #        server server2;
        #        fair;
        #    }
        # 5. url_hash (third party): requests are assigned by the hash of the requested
        #    URL, so each URL goes to the same backend server, which is more efficient
        #    when the backend server is caching. Example: add a hash statement in the
        #    upstream; the server statements must not include weight or other parameters;
        #    hash_method selects the hash algorithm.
        #    upstream backend {
        #        server squid1:3128;
        #        server squid2:3128;
        #        hash $request_uri;
        #        hash_method crc32;
        #    }

        # Tips:
        # upstream bakend {    # define the load-balancing devices' IPs and states
        #     ip_hash;
        #     server 127.0.0.1:9090 down;
        #     server 127.0.0.1:8080 weight=2;
        #     server 127.0.0.1:6060;
        #     server 127.0.0.1:7070 backup;
        # }
        # In a server that needs load balancing, add: proxy_pass http://bakend/;
        #
        # Each device's state can be set to:
        # 1. down: the server does not participate in the load for the time being.
        # 2. weight: the larger the weight, the greater the share of the load.
        # 3. max_fails: the number of allowed failed requests, 1 by default. When the
        #    maximum is exceeded, the error defined by the proxy_next_upstream module is
        #    returned.
        # 4. fail_timeout: the pause time after max_fails failures.
        # 5. backup: when all the other non-backup machines are down or busy, request the
        #    backup machine, so this machine carries the lightest load.
        #
        # Nginx supports setting up several groups of load balancing at the same time,
        # for different servers to use.
        # client_body_in_file_only: when on, client POST data is recorded to a file for
        # debugging; client_body_temp_path sets the directory for those files (up to 3
        # levels of directories may be configured).
        # location matches URLs and can redirect or perform new proxy load balancing.
    }

    # virtual host configuration
    server {
        # listening port
        listen 80;
        # there can be multiple domain names, separated by spaces
        server_name www.jd.com jd.com;
        index index.html index.htm index.php;
        root /data/www/jd;

        # load balancing for ******
        location ~ .*\.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }

        # image cache time
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 10d;
        }

        # JS and CSS cache time
        location ~ .*\.(js|css)?$ {
            expires 1h;
        }

        # log format settings:
        # $remote_addr and $http_x_forwarded_for record the client's IP address;
        # $remote_user records the client user name;
        # $time_local records the access time and time zone;
        # $request records the request URL and HTTP protocol;
        # $status records the request status (200 on success);
        # $body_bytes_sent records the size of the response body sent to the client;
        # $http_referer records the page the visit came from;
        # $http_user_agent records information about the client's browser.
        # When a web server sits behind a reverse proxy, it cannot obtain the client's IP
        # address directly: $remote_addr yields the reverse proxy server's IP. In the HTTP
        # headers of the forwarded request, the reverse proxy server can add an
        # X-Forwarded-For entry to record the original client's IP address and the server
        # address the client originally requested.
        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" $http_x_forwarded_for';

        # access logs of this virtual host
        access_log /usr/local/nginx/logs/host.access.log main;
        access_log /usr/local/nginx/logs/host.access.404.log log404;
        # enable reverse proxying for "/"
        location / {
            proxy_pass http://127.0.0.1:88;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;

            # the backend web server can obtain the user's real IP via X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # the following reverse-proxy settings are optional
            proxy_set_header Host $host;

            # maximum size of a single file the client may request
            client_max_body_size 10m;

            # maximum number of bytes the proxy buffers for a client request body.
            # If this is set to a fairly large value such as 256k, submitting any image
            # smaller than 256k works normally in both Firefox and Internet Explorer. If
            # you comment out the directive and use the default client_body_buffer_size
            # (twice the operating system page size, i.e. 8k or 16k), problems arise:
            # with either Firefox 4.0 or IE 8.0, submitting a relatively large image of
            # around 200k returns an Internal Server Error.
            client_body_buffer_size 128k;

            # make Nginx intercept HTTP responses with codes of 400 or higher
            proxy_intercept_errors on;

            # timeout for the connection to the backend server: initiating the handshake
            # and waiting for a response (proxy connect timeout)
            proxy_connect_timeout 90;

            # backend server data transfer time: the backend must finish sending all data
            # within this time (proxy send timeout)
            proxy_send_timeout 90;

            # after a successful connection, the time to wait for the backend server's
            # response; the request has in fact already entered the backend queue awaiting
            # processing (proxy read timeout)
            proxy_read_timeout 90;

            # buffer size for the first part of the answer read from the proxied server,
            # which usually contains a small response header. By default it equals the
            # size of one buffer specified by proxy_buffers, but it can be set smaller.
            proxy_buffer_size 4k;

            # number and size of buffers used to read the answer from the proxied server;
            # the default is the page size, 4k or 8k depending on the operating system.
            # For the average web page, keep the total below 32k.
            proxy_buffers 4 32k;

            # buffer size under heavy load (proxy_buffers * 2)
            proxy_busy_buffers_size 64k;

            # size of data written to proxy_temp_path at a time, to prevent a worker
            # process from blocking too long when passing files; responses larger than
            # this value received from the upstream server are buffered to disk
            proxy_temp_file_write_size 64k;
        }

        # address for viewing Nginx status
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file confpasswd;
            # the contents of the htpasswd file can be generated with the htpasswd tool
            # provided by Apache
        }

        # local dynamic/static separation reverse-proxy configuration:
        # all JSP pages are handled by Tomcat or Resin
        location ~ .(jsp|jspx|do)?$ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }

        # all static files are served directly by Nginx without going through Tomcat or Resin
        location ~ .*\.(htm|html|gif|jpg|jpeg|png|bmp|swf|ioc|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$ {
            expires 15d;
        }
    }
}
