Nginx Performance Tuning


Address: http://nginx.com/blog/tuning-nginx/

Tuning NGINX for Performance


NGINX is well known as a high-performance load balancer, cache, and web server, powering over 40% of the busiest websites in the world. Most of the default NGINX and Linux settings work well for most use cases, but some tuning can be necessary to achieve optimal performance. This blog post discusses some of the NGINX and Linux settings to consider when tuning a system. There are many settings available, but for this post we will cover only the few that most users would benefit from adjusting. The settings not covered here should only be considered by those with a deep understanding of NGINX and Linux, or after a recommendation from the NGINX support or professional services teams. NGINX professional services has worked with some of the world's busiest websites to tune NGINX for maximum performance and is available to work with any customer who needs to get the most out of their system.


Introduction


A basic understanding of the NGINX architecture and configuration concepts is assumed. This post will not attempt to duplicate the NGINX documentation, but will provide an overview of the various options with links to the relevant documentation.


A good rule to follow when tuning is to change one setting at a time, and if the change does not improve performance, set it back to the default value.


We will start with a discussion of Linux tuning since some of these values can impact some of the values you will use for your NGINX configuration.


Linux Configuration


Modern Linux kernels (2.6+) do a good job of sizing the various settings, but there are some you may want to change. If the operating system settings are too low, you will see errors in the kernel log indicating that you should adjust them. There are many possible Linux settings, but we will cover those most likely to need tuning for normal workloads. Please refer to the Linux documentation for details on adjusting these settings.


The Backlog Queue


The following settings relate directly to connections and how they are queued. If you have a high rate of incoming connections and are seeing uneven levels of performance, for example some connections appear to be stalling, then tuning these settings may help.


net.core.somaxconn: This sets the size of the queue for connections waiting for NGINX to accept them. Since NGINX accepts connections very quickly, this value does not usually need to be very large, but the default can be quite low, so increasing it can be a good idea for a high-traffic website. If the setting is too low, you will see error messages in the kernel log; increase the value until the errors stop. Note: if you set this to a value greater than 512, you should change the backlog parameter of the NGINX listen directive to match this number. (The backlog parameter of listen sets the maximum length of the queue of pending connections; it defaults to -1 on FreeBSD and Mac OS X and to 511 on other platforms.)


net.core.netdev_max_backlog: This sets the rate at which packets can be buffered by the network card before being handed off to the CPU. For machines with a high amount of bandwidth this value may need to be increased. Check the documentation for your network card for advice on this setting, or check the kernel log for errors relating to it.
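As a sketch, the two queue-related settings above can be raised in /etc/sysctl.conf and applied with sysctl -p; the values shown are illustrative examples, not recommendations:

```
# /etc/sysctl.conf -- example values, tune for your workload
net.core.somaxconn = 4096            # accept queue; match the NGINX listen backlog
net.core.netdev_max_backlog = 65536  # NIC-to-CPU packet backlog
```

If somaxconn is raised above 512 as here, the matching NGINX directive would be along the lines of `listen 80 backlog=4096;` so the two limits agree.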


File Descriptors


File descriptors are operating system resources used to handle things such as connections and open files. NGINX can use up to two file descriptors per connection. For example, if it is proxying, it can have one for the client connection and another for the connection to the proxied server, although if HTTP keepalives are used this ratio will be much lower. For a system that will see a large number of connections, these settings may need to be adjusted:


sys.fs.file_max: This is the system-wide limit for file descriptors.


nofile: This is the user file descriptor limit and is set in the /etc/security/limits.conf file.

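A minimal sketch of raising the per-user limit; the username and values are placeholders, not part of the original article:

```
# /etc/security/limits.conf -- example entries for the user running NGINX
www-data  soft  nofile  65536
www-data  hard  nofile  65536
```

NGINX also provides the worker_rlimit_nofile directive, which raises the limit for worker processes without touching limits.conf.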

Ephemeral ports


When NGINX is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral port.


net.ipv4.ip_local_port_range: This specifies the starting and ending port values to use. If you see that you are running out of ports, you can increase this range. A common setting is to use ports 1024 to 65000.


net.ipv4.tcp_fin_timeout: This specifies how long after a port is no longer being used it can be used again for another connection. This usually defaults to 60 seconds but can usually be safely reduced to 30 or even 15 seconds.

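The ephemeral port settings can likewise be sketched as /etc/sysctl.conf entries (values illustrative):

```
# /etc/sysctl.conf -- ephemeral ports for proxied upstream connections
net.ipv4.ip_local_port_range = 1024 65000  # widen the usable port range
net.ipv4.tcp_fin_timeout = 30              # recycle ports sooner (default 60)
```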

NGINX Configuration


The following are some NGINX directives that can impact performance. As stated above, we will only discuss those directives that we recommend most users look at adjusting. Any directive not mentioned here is one that we recommend not changing without direction from the NGINX team.


Worker Processes


NGINX can run multiple worker processes, each capable of processing a large number of connections. You can control how many worker processes are run and how connections are handled with the following directives:


worker_processes: This controls the number of worker processes that NGINX will run. In most cases, running one worker process per CPU core works well, which can be achieved by setting this directive to "auto". There are times when you may want to increase this number, such as when the worker processes have to do a lot of disk I/O. The default is 1.


worker_connections: This is the maximum number of connections that can be processed at one time by each worker process. The default is 512, but most systems can handle a larger number. What this should be set to will depend on the size of the server and the nature of the traffic, and can be discovered through testing.

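A sketch of these two directives in nginx.conf; the connection count is an illustrative figure to be verified by testing:

```nginx
# nginx.conf -- top-level (main) context
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # per-worker connection cap; default is 512
}
```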

Keepalives


Keepalive connections can have a major impact on performance by reducing the CPU and network overhead needed for opening and closing connections. NGINX terminates all client connections and has separate, independent connections to the upstream servers. NGINX supports keepalives for both the client and upstream servers. The following directives deal with client keepalives:


keepalive_requests: This is the number of requests a client can make over a single keepalive connection. The default is 100, but it can be set to a much higher value, which can be especially useful for testing when the load-generating tool is sending many requests from a single client.


keepalive_timeout: This is how long a keepalive connection will remain open once it becomes idle.
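The client-side keepalive directives above might be sketched as follows (the values are illustrative):

```nginx
# nginx.conf -- http or server context
keepalive_requests 1000;   # requests allowed per client connection (default 100)
keepalive_timeout  75s;    # close idle client connections after 75 seconds
```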

The following directives deal with upstream keepalives:


keepalive: This specifies the number of idle keepalive connections to an upstream server that remain open for each worker process. There is no default value for this directive.


To enable keepalive connections to the upstream servers you must also add the following directives:

proxy_http_version 1.1;
proxy_set_header Connection "";

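Putting the pieces together, a hedged sketch of an upstream keepalive configuration; the upstream name and server addresses are placeholders:

```nginx
# nginx.conf -- http context; "backend" and the addresses are examples
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    keepalive 32;                       # idle keepalive connections per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;         # keepalive requires HTTP/1.1
        proxy_set_header Connection ""; # clear the Connection header
    }
}
```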

Access Logging


Logging each request takes both CPU and I/O cycles, and one way to reduce this impact is to enable access log buffering. This causes NGINX to buffer a series of log entries and write them to the file at one time rather than as separate write operations. Access log buffering is enabled by specifying the "buffer=size" option of the access_log directive, which sets the size of the buffer to be used. You can also use the "flush=time" option to tell NGINX to write the entries in the buffer after this amount of time. With these two options defined, NGINX will write entries to the log file when the next log entry will not fit into the buffer, or when the entries in the buffer are older than the time specified by the flush parameter. Log entries will also be written when a worker process is re-opening log files or shutting down. It is also possible to disable access logging completely.

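A sketch of buffered access logging; the path, buffer size, and flush interval are illustrative:

```nginx
# nginx.conf -- http, server, or location context
access_log /var/log/nginx/access.log combined buffer=32k flush=5m;

# or disable access logging entirely:
# access_log off;
```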

Sendfile

Sendfile is an operating system feature that can be enabled in NGINX. It can provide faster TCP data transfers by doing in-kernel copying of data from one file descriptor to another, often achieving zero-copy. NGINX can use it to write cached or on-disk content down a socket, without any context switching to user space, making it extremely fast and reducing CPU overhead. Because the data never touches user space, it is not possible to insert filters that need to access the data into the processing chain, so you cannot use any of the NGINX filters that change the content, e.g. the gzip filter. It is disabled by default.

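Enabling it is a one-line change; tcp_nopush is a commonly paired companion directive, not something the article requires:

```nginx
# nginx.conf -- http, server, or location context
sendfile  on;    # in-kernel copy from file descriptor to socket
tcp_nopush on;   # only takes effect when sendfile is on
```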

Limits

NGINX and NGINX Plus allow you to set various limits that help control the resources consumed by clients, and therefore affect the performance of your system as well as the user experience and security. The following are some of these directives:


limit_conn / limit_conn_zone: These directives can be used to limit the number of connections NGINX will allow, for example from a single client IP address. This can help prevent individual clients from opening too many connections and consuming too many resources.

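A minimal sketch; the zone name, zone size, and connection limit are placeholders:

```nginx
# nginx.conf -- http context
limit_conn_zone $binary_remote_addr zone=addr:10m;  # track by client IP

server {
    limit_conn addr 10;   # at most 10 concurrent connections per IP
}
```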

limit_rate: This limits the bandwidth allowed for a client on a single connection. It can prevent the system from being overloaded by certain clients and can help ensure that all clients receive good quality of service.


limit_req / limit_req_zone: These directives can be used to limit the rate of requests being processed by NGINX. As with limit_rate, this can help prevent the system from being overloaded by certain clients and help ensure that all clients receive good quality of service. They can also be used to improve security, especially on login pages, by limiting the request rate so that it is adequate for a human user but slows down programs trying to access your application.

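A hedged sketch of request-rate limiting on a login page; the zone name, rate, and path are illustrative:

```nginx
# nginx.conf -- http context
limit_req_zone $binary_remote_addr zone=login:10m rate=2r/s;

server {
    location /login {
        limit_req zone=login burst=5;  # queue short bursts, reject the rest
    }
}
```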

max_conns: This is set for a server in an upstream group and is the maximum number of simultaneous connections allowed to that server. It can help prevent the upstream servers from being overloaded. The default is zero, meaning that there is no limit.


queue: If max_conns is set for any upstream server, the queue directive governs what happens when a request cannot be processed because no servers in the upstream group are available and some of them have reached the max_conns limit. This directive can be set with the number of requests to queue and for how long. If it is not set, no queueing will occur.

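A sketch of these upstream limits; the addresses and numbers are placeholders, and note that the queue directive is an NGINX Plus feature:

```nginx
# nginx.conf -- http context; server addresses are examples
upstream backend {
    server 10.0.0.1:8080 max_conns=100;  # cap concurrent connections per server
    server 10.0.0.2:8080 max_conns=100;
    queue 50 timeout=30s;                # NGINX Plus: hold up to 50 requests for 30s
}
```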

Additional considerations


There are additional features of NGINX that can be used to increase the performance of a web application. They don't really fall under the heading of tuning, but are worth mentioning because their impact can be considerable. We will discuss two of these features.


Caching


By enabling caching on an NGINX instance that is load balancing a set of web or application servers, you can dramatically improve the response time to the client while at the same time dramatically reducing the load on the backend servers. Caching is a subject of its own and will not be covered here. For more information on configuring NGINX for caching, please see: NGINX Admin Guide - Caching.

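A hedged sketch of a proxy cache; the path, zone name, sizes, and upstream name are placeholders:

```nginx
# nginx.conf -- http context
proxy_cache_path /var/cache/nginx keys_zone=mycache:10m max_size=1g;

server {
    location / {
        proxy_cache mycache;          # use the zone defined above
        proxy_pass http://backend;    # "backend" is a placeholder upstream
    }
}
```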

Compression


Compressing the responses to clients can greatly reduce their size, requiring less bandwidth; however, it does require CPU resources to do the compression, so it is best used when there is value in reducing bandwidth. It is important to note that you should not enable compression for objects that are already compressed, such as JPEGs. For more information on configuring NGINX for compression, please see: NGINX Admin Guide - Compression and Decompression.

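A minimal gzip sketch; the MIME type list is illustrative, with already-compressed formats such as JPEG deliberately left out:

```nginx
# nginx.conf -- http context
gzip on;
gzip_types text/plain text/css application/json application/javascript;
# text/html is always compressed when gzip is on;
# image formats are omitted because they are already compressed
```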
