Varnish, a High-Performance HTTP Accelerator: Performance Tuning

Reference: http://ixdba.blog.51cto.com/2895551/708021

Whether varnish runs stably and quickly depends heavily on how Linux is tuned and on varnish's own parameter settings. After varnish has been installed and configured, the server should also be optimized at the operating-system level and through varnish's configuration parameters so that varnish performs at its best.

I. Optimize Linux Kernel Parameters

Kernel parameters are the interface through which users interact with the system kernel. Through this interface, the kernel configuration can be updated dynamically while the system is running, and these parameters are exposed through the Linux proc file system, so performance can be tuned by adjusting them there. The following settings are the officially recommended configuration:

    net.ipv4.ip_local_port_range = 1024 65536
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.ipv4.tcp_fin_timeout = 30
    net.core.netdev_max_backlog = 30000
    net.ipv4.tcp_no_metrics_save = 1
    net.core.somaxconn = 262144
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_orphans = 262144
    net.ipv4.tcp_max_syn_backlog = 262144
    net.ipv4.tcp_synack_retries = 2
    net.ipv4.tcp_syn_retries = 2

Each of these options has the following meaning:

net.ipv4.ip_local_port_range: the range of local ports used for outgoing connections. The default is 32768 to 61000; here it is widened to 1024 to 65536.

net.core.rmem_max: the maximum receive socket buffer size, in bytes.

net.core.wmem_max: the maximum send socket buffer size, in bytes.

net.ipv4.tcp_rmem and net.ipv4.tcp_wmem: these two parameters tune the TCP receive and send buffers. Each takes three integers: min, default and max. For tcp_rmem, min is the minimum amount of memory reserved for a TCP socket's receive buffer, default is the default amount, and max is the maximum. For tcp_wmem the three values have the same meanings for the send buffer.

net.ipv4.tcp_fin_timeout: shortens the time a connection spends in the FIN-WAIT-2 state so the system can handle more connections; the value is an integer number of seconds. For example, when a TCP session ends, host A sends a FIN packet to host B; after receiving B's ACK, A enters FIN-WAIT-2 and waits for B's FIN, which it then acknowledges. This parameter sets how long A waits in FIN-WAIT-2 for the peer's FIN; if no FIN arrives within that time, the connection is released automatically.

net.core.netdev_max_backlog: the maximum number of packets allowed to queue on a network interface when packets arrive faster than the kernel can process them.

net.ipv4.tcp_syncookies: whether to enable SYN cookies, which help protect the server against SYN flood attacks. The default is 0; here it is set to 1.

net.ipv4.tcp_max_orphans: the maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to fend off simple DoS attacks and should not be set too low. Here it is set to 262144.

net.ipv4.tcp_max_syn_backlog: the length of the SYN queue. The default is 1024; raising it to 262144 here allows more connections to wait for establishment.

net.ipv4.tcp_synack_retries: the number of SYN+ACK packets the kernel sends before giving up on a connection.

net.ipv4.tcp_syn_retries: the number of SYN packets the kernel sends before giving up on a connection.

Add the settings above to the /etc/sysctl.conf file and run the following command to make them take effect:

    [root@varnish-server ~]# sysctl -p
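Individual values can also be checked or tried out on the fly with the sysctl command itself before they are made permanent; a minimal sketch, using one of the parameters above as an example:

    # Show the current value of a parameter
    sysctl net.ipv4.tcp_fin_timeout
    # Set it immediately for testing (this change does not survive a reboot)
    sysctl -w net.ipv4.tcp_fin_timeout=30
    # Reload everything defined in /etc/sysctl.conf
    sysctl -p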
II. Optimize System Resources

Suppose 10 users are logged on to a Linux host and each opens 500 documents at the same time, with no limits placed on system resources. If each document is 10 MB, the system's memory comes under enormous pressure; without limits, resource usage inevitably descends into chaos, and real application environments are far more complex than this assumption. This is where ulimit comes in: it is a simple and effective way to impose resource limits.

ulimit can restrict many aspects of the system. By limiting the resources available to processes started from the shell, it helps the system use and allocate resources sensibly. Among the things ulimit can limit are: the size of core files, the amount of locked memory, the size of the resident set, the size of a process's data segment, the number of open file descriptors, the maximum virtual memory available to a shell process, the size of files created by a shell process, the maximum stack size, the maximum number of processes (threads) a single user may run, and CPU time. It also distinguishes between hard and soft limits.

ulimit can be applied in two ways: as a temporary limit or as a permanent one. Run from the command line, it restricts only the shell session it is issued in; the limits end when the session ends and do not affect other shell sessions. For long-term, fixed restrictions, the ulimit command can be added to the login shell's configuration file, which permanently limits the resources of processes started from that shell.

ulimit is used in the following form:

    ulimit [options] [value]

The meaning of each option, together with simple examples, is shown in Table 1.
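As a rough stand-in for Table 1, the resource types above map onto the standard options of bash's built-in ulimit roughly as follows (this mapping is general bash behavior, not copied from the original table):

    ulimit -c <blocks>    # maximum size of core files created
    ulimit -l <kbytes>    # maximum amount of memory that may be locked
    ulimit -m <kbytes>    # maximum resident set size
    ulimit -d <kbytes>    # maximum size of a process's data segment
    ulimit -n <number>    # maximum number of open file descriptors
    ulimit -v <kbytes>    # maximum virtual memory available to the shell
    ulimit -f <blocks>    # maximum size of files created by the shell
    ulimit -s <kbytes>    # maximum stack size
    ulimit -u <number>    # maximum number of processes for a single user
    ulimit -t <seconds>   # maximum CPU time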

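To see which limits are currently in effect, and to tell hard limits from soft ones, bash's ulimit can also be queried directly; a small sketch:

    # List every limit in effect for the current shell session
    ulimit -a
    # Soft and hard limits can differ; -S and -H pick which one to show (or set)
    ulimit -Sn
    ulimit -Hn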
Once the meaning and usage of ulimit are clear, limits can be set for the varnish host. The values used here are as follows, but they cannot be applied blindly; choose values that suit your own application environment:

    ulimit -HSn 131072
    ulimit -HSu unlimited

To make sure these limits take effect permanently, it is best to put the ulimit settings in the varnish startup script.

III. Optimize Varnish Parameters

Telnet to varnish's management port (3500 here) and run the "param.show" command to display all of the parameters varnish is running with; parameters can also be changed through the same interface. Varnish has many runtime parameters, and only the ones with a significant impact on performance are covered here; for the rest, see the official documentation.

First, look at the following four parameters:

    thread_pools 4 [pools]
    thread_pool_min 50 [threads]
    thread_pool_max 5120 [threads]
    thread_pool_timeout 10 [seconds]

thread_pools: the number of thread pools. This is generally set equal to the number of CPUs in the system. Configuring more pools gives varnish stronger concurrent processing capability, but also consumes more CPU and memory.

thread_pool_min: the minimum number of threads in each pool. When a pool receives a request, it hands the request to an idle thread for processing.

thread_pool_max: the maximum total number of threads across all pools. This value must not be too large; about 90% of what the system can handle at peak is a reasonable setting. If it is set too high, hung worker processes can linger.

thread_pool_timeout: the thread idle timeout. When the number of threads exceeds thread_pool_min, a thread is released once it has been idle for longer than thread_pool_timeout.

There are two more parameters:

    lru_interval 20 [seconds]
    listen_depth 1024 [connections]

lru_interval: a time parameter. If an object stays in memory longer than this interval without being reused, it is removed from the LRU (least recently used) queue. LRU is a common algorithm in cache systems, and setting this interval appropriately improves running efficiency.

listen_depth: the length of the TCP connection (listen) queue. A larger value improves concurrent processing capability.

The best way to apply these parameter optimizations is to add them to the varnish startup script and pass them with varnishd's "-p" option, as sketched below.
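A minimal sketch of what the startup script might contain; the installation path, listen address, cache file and VCL file shown here are assumptions rather than values from the article, while the ulimit values and -p parameters are the ones discussed above:

    # Excerpt from a hypothetical varnish startup script; paths, the listen
    # address, the cache storage and the VCL file below are assumptions,
    # not taken from the article.
    ulimit -HSn 131072
    ulimit -HSu unlimited

    # -T is the management interface used earlier in the article; each -p
    # option passes one of the runtime parameters discussed above.
    /usr/local/varnish/sbin/varnishd \
        -a 0.0.0.0:80 \
        -T 127.0.0.1:3500 \
        -s file,/data/varnish/cache,4G \
        -f /usr/local/varnish/etc/default.vcl \
        -p thread_pools=4 \
        -p thread_pool_min=50 \
        -p thread_pool_max=5120 \
        -p thread_pool_timeout=10 \
        -p lru_interval=20 \
        -p listen_depth=1024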

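Once varnish is running, the same parameters can be inspected or changed on the live instance through the management port; a short sketch, assuming the management interface is listening on 127.0.0.1:3500 as in this article and no admin secret file is configured:

    # Connect to the management interface
    telnet 127.0.0.1 3500
    # ...then, at the CLI prompt, for example:
    param.show
    param.show thread_pools
    param.set thread_pool_max 5120

    # varnishadm can be used instead of a raw telnet session
    varnishadm -T 127.0.0.1:3500 param.show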