Varnish Cache: high-performance reverse proxy server and HTTP accelerator

Source: Internet
Author: User
Tags: varnish

1 Varnish Introduction

Varnish is a high-performance, open-source reverse proxy server and HTTP accelerator (cache server). Its developer, Poul-Henning Kamp, is one of FreeBSD's core developers. Varnish adopts a new software architecture that works closely with modern hardware.

In addition to main memory, a modern computer system also has CPU L1, L2, and even L3 caches, and the hard disk has its own cache as well. Squid's architecture manages the movement of data between memory and disk itself and therefore cannot achieve optimal access, whereas the operating system's virtual memory subsystem can; Varnish's design is to leave this work to the operating system. Verdens Gang (vg.no) replaced its original 12 Squid servers with 3 Varnish servers and achieved better performance than before; this is one of Varnish's most successful deployments. Currently, Varnish runs on FreeBSD 6.0/7.0, Solaris, and Linux with 2.6 kernels.


2 Structural Features of Varnish

Varnish stores data in the server's memory. This mode is the most efficient, but the data disappears after a restart; officially, version 3.0 addresses this problem. Varnish can set precise cache times from 0 to 60 seconds, but 32-bit machines support a maximum of 2 GB of cache files. Varnish is configured with VCL and has powerful management functions such as top, stat, admin, and list, so administration is flexible. Varnish's state-machine design is not only clever but also clearly structured, and because cached objects are managed in a binary heap, expired files can be deleted promptly.
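The binary-heap management mentioned above can be sketched as follows: a min-heap keyed on expiry time lets an expiry thread always see the soonest-to-expire object first, so timed-out objects can be removed cheaply. This is an illustrative sketch, not Varnish's actual code; the object names and TTL values are made up.

```python
import heapq
import itertools

# Min-heap of (expiry_time, sequence, object_name); the sequence number
# breaks ties so object names never need to be compared directly.
heap = []
counter = itertools.count()

def insert(name, expiry):
    heapq.heappush(heap, (expiry, next(counter), name))

def expire(now):
    """Delete every object whose expiry time has passed."""
    expired = []
    while heap and heap[0][0] <= now:
        _, _, name = heapq.heappop(heap)
        expired.append(name)
    return expired

insert("/index.html", expiry=10)
insert("/logo.png", expiry=5)
insert("/api/data", expiry=30)
print(expire(now=12))  # ['/logo.png', '/index.html']
```

Because the root of the heap is always the earliest expiry, the expiry thread only ever inspects one entry to decide whether anything needs deleting.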

Compared with the traditional Squid, Varnish has many advantages such as higher performance, faster speed, and more convenient management:

Varnish adopts "Visual Page Cache" technology: all cached data is read directly from memory, while Squid reads cached data from the hard disk. This spares Varnish the frequent swapping of files between memory and disk that Squid performs, so its performance is higher than Squid's.

Varnish is more stable than Squid and has a lower probability of downtime.

Through Varnish's management port, you can use regular expressions to quickly clear parts of the cache in batches, which Squid cannot do.

Varnish supports more concurrent connections: because Varnish establishes and releases TCP connections faster than Squid, it can sustain more TCP connections under high concurrency.

Drawbacks: under high concurrency, Varnish's CPU, I/O, memory, and other resource overhead is higher than Squid's. Once the Varnish process hangs, crashes, or restarts, the cached data is released from memory; all requests are then sent to the backend application server, which puts great pressure on the backend under high concurrency.
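The regex-based batch purge mentioned among the advantages above can be illustrated with a toy in-memory cache. The function below mimics the idea behind Varnish's ban mechanism; the URLs and the `ban` helper are invented for the example.

```python
import re

# Toy cache: URL -> cached body (contents are placeholders).
cache = {
    "/img/logo.png": b"...",
    "/img/banner.jpg": b"...",
    "/index.html": b"...",
}

def ban(pattern):
    """Remove every cached URL matching the regular expression."""
    rx = re.compile(pattern)
    for url in [u for u in cache if rx.search(u)]:
        del cache[url]

ban(r"^/img/")        # purge all images in one operation
print(sorted(cache))  # ['/index.html']
```

One pattern can invalidate an arbitrary set of objects at once, which is what makes regex-based purging so much faster than deleting entries one by one.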

3 Varnish Working Principle

Like most server software, Varnish is divided into a master process and a child process. The master process reads the storage configuration file, invokes the appropriate storage type, and creates or reads cache files of the corresponding size; it then initializes the structures that manage the storage space, forks the child process, and monitors it. During its initialization, the child process maps the previously opened storage file into memory, then creates and initializes the free-block structures and mounts them onto the storage-management structure, ready for allocation. The child process then spawns several threads for its work, including a few management threads and many worker threads.
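The master/child supervision pattern described above can be reduced to a minimal sketch (this is not Varnish's actual code): the master forks a worker, waits on it, and forks a replacement whenever the worker exits.

```python
import os
import sys

def child_main():
    # In real Varnish, the child would map the storage file into memory
    # and start its management and worker threads here; this stand-in
    # simply exits, simulating a crash or shutdown.
    sys.exit(0)

restarts = 0
for _ in range(2):        # master loop: supervise and re-fork
    pid = os.fork()
    if pid == 0:          # child process
        child_main()
    os.waitpid(pid, 0)    # master blocks until the child exits...
    restarts += 1         # ...then starts a replacement

print(restarts)  # 2
```

Keeping the cache logic in a disposable child means a crash loses only the in-memory state, while the small master survives to restart service immediately.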

Next, the real work starts. One Varnish thread is responsible for accepting new HTTP connections: it waits for clients, and when a new connection arrives it wakes one of the waiting worker threads and hands the connection over for processing. The worker thread reads the URI of the HTTP request and looks for an existing object. On a hit, it returns the cached result directly to the client; on a miss, it retrieves the requested content from the backend server, stores it in the cache, and then replies.
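The worker thread's hit/miss logic can be sketched as follows. The cache and the backend stub are toy stand-ins; `fetch_from_backend` and its content are invented for the example.

```python
cache = {}

def fetch_from_backend(uri):
    # Stand-in for a real HTTP request to the backend server.
    return f"<html>page for {uri}</html>"

def handle_request(uri):
    """Worker-thread logic: serve a hit directly, fetch-and-store on a miss."""
    if uri in cache:
        return "hit", cache[uri]
    body = fetch_from_backend(uri)  # miss: go to the backend
    cache[uri] = body               # store so the next request hits
    return "miss", body

print(handle_request("/index.html")[0])  # miss
print(handle_request("/index.html")[0])  # hit
```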

The cache allocation process is as follows: Varnish allocates a cache slot of the corresponding size for the object being stored. For ease of reading and writing, the program rounds each object's size up to the nearest multiple of the memory page size. It then finds the best-fitting free block among the existing free-storage structures and assigns it to the object; if the free block is larger than needed, the surplus is turned into another free block and mounted back onto the management structure. If the cache is full, the oldest objects are released according to the LRU mechanism.
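The rounding and best-fit-with-split steps above can be sketched like this. The page size and free-block sizes are assumptions for the example, and the free list tracks only block sizes for simplicity.

```python
PAGE = 4096  # assumed memory-page size

free_blocks = [16384, 65536]  # sizes of available free blocks (illustrative)

def round_to_page(size):
    """Round an object size up to the nearest page multiple."""
    return (size + PAGE - 1) // PAGE * PAGE

def allocate(size):
    """Best fit: take the smallest free block that fits, split off surplus."""
    need = round_to_page(size)
    fits = [b for b in free_blocks if b >= need]
    if not fits:
        return None  # cache full: a real cache would evict via LRU here
    block = min(fits)
    free_blocks.remove(block)
    if block > need:
        free_blocks.append(block - need)  # surplus becomes a new free block
    return need

print(allocate(5000))  # 8192 (5000 rounded up to two pages)
print(free_blocks)     # [65536, 8192]
```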

The cache release process is as follows: a timeout thread checks the lifetime of every object in the cache. If an object's time-to-live has expired without the object being accessed, it is deleted, and the corresponding structure and storage memory are released. Note that when a memory block is released, Varnish checks the free blocks immediately before and after it; if the free memory in front of or behind the block is contiguous with the released memory, they are merged into a larger block.
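The merge-on-release behaviour can be sketched with a free list of `(offset, size)` intervals; the offsets and sizes below are illustrative, not taken from Varnish.

```python
# Free list as disjoint (offset, size) intervals, kept sorted by offset.
free_list = [(0, 4096), (12288, 4096)]

def release(offset, size):
    """Free a block and merge it with any adjacent free neighbours."""
    free_list.append((offset, size))
    free_list.sort()
    merged = [free_list[0]]
    for off, sz in free_list[1:]:
        last_off, last_sz = merged[-1]
        if last_off + last_sz == off:  # contiguous with previous: coalesce
            merged[-1] = (last_off, last_sz + sz)
        else:
            merged.append((off, sz))
    free_list[:] = merged

release(4096, 8192)  # fills the gap between the two free blocks
print(free_list)     # [(0, 16384)]
```

Coalescing keeps the free list from degenerating into many small fragments, so later allocations can still find large contiguous blocks.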

The file-cache management layer as a whole does not consider the relationship between files and memory; in effect, all objects are treated as residing in memory. If system memory is insufficient, the operating system automatically pages data out to swap space, without any involvement from the Varnish program.

The Varnish workflow is as follows:

[Figure: Varnish workflow]

