Vagrant on Windows: changes to static resource files do not take effect under Nginx in a CentOS virtual machine




I recently started working with Krpano. After modifying the krpano.html file or an XML file locally, the file shows the change when opened inside the virtual machine, but the change does not take effect when the file is served by Nginx.



Changing `sendfile on;` to `sendfile off;` in the http{} block of nginx.conf makes modifications take effect immediately. An annotated Nginx configuration follows, for reference.
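This is a widely reported issue with VirtualBox/Vagrant shared folders, where sendfile can serve stale file contents. A minimal sketch of scripting the fix, assuming a conf path of /usr/local/nginx/conf/nginx.conf (adjust to your install); it is demonstrated here on a throwaway copy so it can be run safely:

```shell
# Flip "sendfile on;" to "sendfile off;" in nginx.conf.
# Demonstrated on a temporary file standing in for the real conf; on a real
# server, point CONF at your actual nginx.conf and follow up with:
#   nginx -t && nginx -s reload
CONF=$(mktemp)
printf 'http {\n    sendfile on;\n}\n' > "$CONF"      # stand-in for the real conf
sed -i 's/sendfile[[:space:]]*on;/sendfile off;/' "$CONF"
grep 'sendfile' "$CONF"                               # the line now reads: sendfile off;
rm -f "$CONF"
```

`nginx -t` validates the edited file before `nginx -s reload` applies it without dropping existing connections.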


 
##### nginx.conf configuration file: detailed annotated explanation #####

#Define users and user groups for Nginx
user www www;

#Number of nginx worker processes; recommended to equal the total number of CPU cores.
worker_processes 8;
 
#Global error log definition type, [debug | info | notice | warn | error | crit]
error_log /usr/local/nginx/logs/error.log info;

#Process pid file
pid /usr/local/nginx/logs/nginx.pid;

#Maximum number of file descriptors a worker process may open
#Working mode and maximum number of connections
#The theoretical value is the system's open-file limit (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests across workers evenly, so it is best to keep the value in line with ulimit -n.
#On Linux 2.6 the open-file limit is 65535, so worker_rlimit_nofile is set to 65535 accordingly.
#If you set it to e.g. 10240, then once total concurrency reaches 30,000-40,000 a worker process may exceed 10240 descriptors and nginx will return 502 errors.
worker_rlimit_nofile 65535;


events
{
    #Event model to use: [kqueue | rtsig | epoll | /dev/poll | select | poll].
    #epoll is a high-performance network I/O model in Linux kernels 2.6 and above; on FreeBSD, use the kqueue model.
    #Supplementary note:
    #Like Apache, nginx has different event models for different operating systems:
    #A) Standard event models
    #   select and poll are the standard event models; nginx falls back to one of them if the current system has nothing more efficient.
    #B) Efficient event models
    #   kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and Mac OS X. On dual-processor Mac OS X systems, kqueue may cause a kernel panic.
    #   epoll: used on Linux kernel version 2.6 and later.
    #   /dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.
    #   eventport: used on Solaris 10. To prevent kernel crashes, install the security patches.
    use epoll;

    #Maximum number of connections per worker process (total max connections = worker_connections * worker_processes)
    #Tune this to the hardware, together with worker_processes above: as large as possible without driving the CPU to 100%. In theory, the maximum number of concurrent connections the server can handle is worker_connections * worker_processes.
    worker_connections 65535;

    #NOTE: keepalive_timeout, client_header_buffer_size and the open_file_cache* directives below are http-context directives in stock nginx; they are reproduced here as in the original source, but belong in the http{} block.

    #Keepalive timeout, in seconds.
    keepalive_timeout 60;

    #Client request header buffer size. This can be set according to your system's page size; a request header rarely exceeds 1k, but since the system page size is generally at least 1k, it is set to the page size here.
    #The page size can be obtained with the command getconf PAGESIZE:
    #  $ getconf PAGESIZE
    #  4096
    #Some request headers exceed 4k; in that case client_header_buffer_size must still be set to an integral multiple of the system page size.
    client_header_buffer_size 4k;

    #Cache for open file descriptors; disabled by default. max sets the number of cache entries (recommended equal to the open-file limit); inactive sets how long a file may go unrequested before its cache entry is dropped.
    open_file_cache max=65535 inactive=60s;

    #How often to revalidate cached entries.
    #Syntax: open_file_cache_valid time  Default: open_file_cache_valid 60s  Context: http, server, location. Specifies when to check the validity of the items cached in open_file_cache.
    open_file_cache_valid 80s;

    #Minimum number of uses within open_file_cache's inactive window. If a file is used at least this many times, its descriptor stays open in the cache; as in the example above, a file not used even once within the inactive time is removed.
    #Syntax: open_file_cache_min_uses number  Default: open_file_cache_min_uses 1  Context: http, server, location. With a larger value, file descriptors are kept open in the cache more aggressively.
    open_file_cache_min_uses 1;

    #Syntax: open_file_cache_errors on | off  Default: open_file_cache_errors off  Context: http, server, location. Specifies whether file-lookup errors are cached as well.
    open_file_cache_errors on;
}
 
 
 
#Set up the http server and use its reverse proxy function to provide load balancing support
http
{
    #File extension and file type mapping table
    include mime.types;

    #Default file type
    default_type application/octet-stream;

    #Default encoding
    #charset utf-8;

    #Size of the hash tables holding server names
    #The hash tables that hold server names are controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size is always a multiple of the processor's cache-line size, which reduces memory accesses and so speeds up key lookups: when the bucket size matches the cache-line size, a lookup takes at most two memory accesses in the worst case, one to determine the bucket address and one to find the key inside it. Therefore, if nginx reports that hash max size or hash bucket size must be increased, increase hash max size first.
    server_names_hash_bucket_size 128;

    #Client request header buffer size. This can be set according to your system's page size; a request header rarely exceeds 1k, but since the system page size is generally at least 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.
    client_header_buffer_size 32k;

    #Buffers for large client request headers. By default, nginx reads a header using the client_header_buffer_size buffer; if a header is too large, large_client_header_buffers is used instead.
    large_client_header_buffers 4 64k;

    #Maximum size of the client request body (e.g. file uploads) accepted through nginx
    client_max_body_size 8m;

    #Enable efficient file transfer mode. The sendfile directive controls whether nginx calls the sendfile() function (zero-copy) to output files. Set it to on for ordinary applications; for heavy disk-I/O applications such as download services, set it to off to balance disk and network I/O processing speed and reduce system load. Note: if images or other static files do not display or update properly, change this to off.
    sendfile on;

    #Enable directory listings; suitable for a download server. Disabled by default.
    autoindex on;

    #Enables or disables the TCP_CORK socket option; only effective when sendfile is in use.
    tcp_nopush on;
     
    tcp_nodelay on;

    #Long connection timeout, unit is second
    keepalive_timeout 120;

    #FastCGI-related parameters to improve site performance: reduce resource consumption and speed up access. The parameters below are largely self-explanatory.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    #gzip module settings
    gzip on; #Enable gzip compressed output
    gzip_min_length 1k; #Minimum response size eligible for compression
    gzip_buffers 4 16k; #Compression buffers
    gzip_http_version 1.0; #Compression HTTP version (default 1.1; if the front end is squid 2.5, use 1.0)
    gzip_comp_level 2; #Compression level
    gzip_types text/plain application/x-javascript text/css application/xml; #Compression types; text/html is already included by default, so it need not be listed. Listing it causes no problem, only a warning.
    gzip_vary on;

    #Required when limiting the number of connections per IP
    #limit_zone crawler $binary_remote_addr 10m;



    #Load balancing configuration
    upstream piao.jd.com {
     
        #upstream load balancing: weight sets the per-server weight and can be defined according to machine capacity. The higher the weight, the greater the chance of being assigned a request.
        server 192.168.80.121:80 weight=3;
        server 192.168.80.122:80 weight=2;
        server 192.168.80.123:80 weight=3;

        #nginx's upstream module currently supports the following distribution methods:
        #1. Round robin (default)
        #   Requests are assigned to the back-end servers one by one in order; if a back-end server goes down, it is removed automatically.
        #2. weight
        #   Specifies the polling probability; weight is proportional to the access ratio, used when back-end server performance is uneven.
        #   E.g.:
        #   upstream bakend {
        #       server 192.168.0.14 weight=10;
        #       server 192.168.0.15 weight=10;
        #   }
        #3. ip_hash
        #   Requests are assigned by a hash of the client IP, so each visitor consistently reaches the same back-end server, which solves the session-stickiness problem.
        #   E.g.:
        #   upstream bakend {
        #       ip_hash;
        #       server 192.168.0.14:88;
        #       server 192.168.0.15:80;
        #   }
        #4. fair (third-party)
        #   Requests are assigned according to back-end response time; servers with shorter response times are preferred.
        #   upstream backend {
        #       server server1;
        #       server server2;
        #       fair;
        #   }
        #5. url_hash (third-party)
        #   Requests are distributed by a hash of the requested URL, so each URL goes to the same back-end server; most effective when the back end is a cache.
        #   Example: add a hash statement in the upstream block; other parameters such as weight must not appear in the server lines. hash_method selects the hash algorithm.
        #   upstream backend {
        #       server squid1:3128;
        #       server squid2:3128;
        #       hash $request_uri;
        #       hash_method crc32;
        #   }

        #tips:
        #upstream bakend {  #defines the IPs and states of the load-balanced devices
        #    ip_hash;
        #    server 127.0.0.1:9090 down;
        #    server 127.0.0.1:8080 weight=2;
        #    server 127.0.0.1:6060;
        #    server 127.0.0.1:7070 backup;
        #}
        #In the server block that needs load balancing, add: proxy_pass http://bakend/;

        #Per-server state values:
        #1. down: the server temporarily does not participate in the load.
        #2. weight: the larger the weight, the larger the share of the load.
        #3. max_fails: number of failed requests allowed; default 1. When exceeded, the error defined by the proxy_next_upstream module is returned.
        #4. fail_timeout: how long to pause the server after max_fails failures.
        #5. backup: receives requests only when all non-backup machines are down or busy, so this machine carries the lightest load.

        #nginx supports configuring multiple load-balancing groups at the same time, for use by different servers.
        #client_body_in_file_only: when set to on, the client POST body is recorded to a file, which is useful for debugging.
        #client_body_temp_path: directory for those record files; up to 3 levels of subdirectories can be configured.
        #location matches URLs and can redirect or hand off to a new proxy / load-balancing group.
    }
     
     
     
    #Virtual host configuration
    server
    {
        #Listener port
        listen 80;

        #Domain names can have multiple, separated by spaces
        server_name www.jd.com jd.com;
        index index.html index.htm index.php;
        root /data/www/jd;

        #Load balancing on ******
        location ~ .*\.(php|php5)?$
        {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }
         
        #Picture cache time settings
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
        {
            expires 10d;
        }
         
        #JS and CSS cache time settings
        location ~ .*\.(js|css)?$
        {
            expires 1h;
        }
         
        #Log format settings:
        #$remote_addr and $http_x_forwarded_for: record the client's IP address;
        #$remote_user: records the client user name;
        #$time_local: records the access time and time zone;
        #$request: records the request URL and HTTP protocol;
        #$status: records the request status; 200 on success;
        #$body_bytes_sent: records the size of the response body sent to the client;
        #$http_referer: records which page the visit was linked from;
        #$http_user_agent: records information about the client's browser.
        #A web server placed behind a reverse proxy cannot obtain the client's IP address directly: $remote_addr then yields the reverse proxy server's IP. The reverse proxy can add an X-Forwarded-For header to the forwarded request to record the original client's IP address and the server address originally requested.
        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" $http_x_forwarded_for';
         
        #Define the access log of this virtual host
        access_log /usr/local/nginx/logs/host.access.log main;
        access_log /usr/local/nginx/logs/host.access.404.log log404;
         
        #Enable reverse proxy for "/"
        location / {
            proxy_pass http://127.0.0.1:88;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;

            #The back-end web server can obtain the user's real IP via X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            #The following are optional reverse-proxy settings.
            proxy_set_header Host $host;

            #Maximum number of single file bytes allowed by the client
            client_max_body_size 10m;

            #Maximum number of bytes of the client request body buffered by the proxy.
            #If this is set to a fairly large value such as 256k, submitting any image smaller than 256k from Firefox or IE works normally. If the directive is commented out and the default client_body_buffer_size is used (twice the operating-system page size, i.e. 8k or 16k), a problem appears:
            #with either Firefox 4.0 or IE 8.0, submitting a relatively large image of about 200k returns 500 Internal Server Error.
            client_body_buffer_size 128k;

            #Makes nginx intercept back-end responses with HTTP status code 300 or higher so they can be handled with the error_page directive.
            proxy_intercept_errors on;

            #Timeout for establishing a connection with the back-end server (proxy connect timeout): initiating the handshake and waiting for a response.
            proxy_connect_timeout 90;

            #Timeout for transmitting the request to the back-end server (proxy send timeout): the back end must accept all data within the specified time.
            proxy_send_timeout 90;

            #Timeout for the back-end server's response after a successful connection (proxy read timeout): in effect, the time the request may wait in the back end's queue for processing.
            proxy_read_timeout 90;

            #Buffer size for the first part of the response read from the back-end server; this part normally contains just a small response header. By default it equals the size of one buffer set by the proxy_buffers directive, but it can be made smaller.
            proxy_buffer_size 4k;

            #proxy_buffers: number and size of buffers used to read the response from the back-end server; for pages averaging under 32k, 4 buffers of 32k each. The default buffer size equals the page size, 4k or 8k depending on the operating system.
            proxy_buffers 4 32k;

            #Buffer size under high load (proxy_buffers * 2)
            proxy_busy_buffers_size 64k;

            #Size of data written at a time to proxy_temp_path, to prevent a worker process from blocking too long while spooling files; responses larger than the proxy buffers are written to temporary files there.
            proxy_temp_file_write_size 64k;
        }
        #Set the address for viewing Nginx status
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file conf/htpasswd;
            #The contents of the htpasswd file can be generated with the htpasswd tool that ships with Apache.
        }
         
        #Local dynamic and static separation reverse proxy configuration
        #All jsp pages are handled by tomcat or resin
        location ~ \.(jsp|jspx|do)?$ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }
         
        #All static files are read directly by nginx without going through tomcat or resin
        location ~ .*\.(htm|html|gif|jpg|jpeg|png|bmp|swf|ico|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$
        {
            expires 15d;
        }
         
        location ~ .*\.(js|css)?$
        {
            expires 1h;
        }
    }
}
##### nginx.conf configuration file: detailed annotated explanation #####
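Tying this back to the original Krpano problem, a minimal sketch of a server block for serving static files from a Vagrant shared folder with sendfile disabled might look as follows. All paths and names here (server_name, the /vagrant/krpano root, krpano.html as index) are illustrative assumptions, not taken from the source:

```nginx
http {
    include   mime.types;
    sendfile  off;   # shared-folder caching can make sendfile serve stale content

    server {
        listen      80;
        server_name localhost;        # assumed
        root        /vagrant/krpano;  # hypothetical shared-folder mount
        index       krpano.html;
    }
}
```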


Configuration notes adapted from: http://www.cnblogs.com/hunttown/p/5759959.html




