High-performance cache accelerator Varnish (concept)

I. Varnish Introduction

Varnish is a high-performance, open-source HTTP accelerator. Many portal websites have deployed Varnish and report good results: compared with Squid, it responds more stably, runs more efficiently, and uses fewer resources.

Its author, Poul-Henning Kamp, is one of FreeBSD's kernel developers. Varnish adopts a new software architecture that cooperates closely with modern hardware. Back in 1975 there were only two kinds of storage media: memory and hard disk. Today, besides main memory, a computer system also has L1, L2, and even L3 caches inside the CPU. Squid keeps its own cache store on the hard disk and handles object replacement itself, so its architecture cannot know how data is actually placed across these layers; the operating system, however, does know. Varnish's design therefore hands this work over to the operating system, and that is the core of the Varnish cache architecture.

Verdens Gang, Norway's largest online newspaper, replaced its original 12 Squid servers with 3 Varnish instances and achieved better performance than before, which is Varnish's best-known success story.

II. Features of Varnish

Varnish is a lightweight caching and reverse-proxy software. Its main strengths are its advanced design concepts and well-crafted framework. The codebase is still small; its functionality keeps improving but still needs to be enriched and strengthened. Some of Varnish's features are listed below:

1. The cache lives in memory, so the cached data is lost after a restart

2. Good I/O performance through the use of virtual memory

3. Supports setting precise cache times in the range of 0 to 60 seconds

4. Flexible configuration management through VCL (see the sketch after this list)

5. The maximum size of a single cached file on 32-bit machines is 2 GB

6. Powerful management functions, such as top, stat, admin, and list

7. A clever state-machine design with a clear structure

8. Uses a binary heap to manage cached objects, so expired objects can be removed proactively
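To make point 4 concrete, here is a minimal VCL sketch in the VCL 4.0 style (the backend address and the 60-second TTL are example values, not settings from this article); it defines one backend and pins a precise cache time on every fetched object:

    vcl 4.0;

    # Example backend; replace the host and port with your own web server.
    backend default {
        .host = "127.0.0.1";
        .port = "8080";
    }

    sub vcl_backend_response {
        # Keep every object in the cache for exactly 60 seconds (example value).
        set beresp.ttl = 60s;
    }

A file like this is loaded with varnishd -f and can be modified and reloaded at runtime, which is what the "flexible VCL configuration management" above refers to.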

III. Comparison between Varnish and Squid

Squid is also a high-performance proxy cache server. It shares some similarities with Varnish and differs in other respects, as follows:

1. Similarities:

1) Both are reverse proxy servers.

2) Both are open-source software.

2. Differences, which are also Varnish's advantages:

1) Varnish is more stable. Under the same load, a Squid server fails more often than Varnish, because Squid needs to be restarted frequently.

2) Varnish delivers faster access. Varnish adopts the "Visual page cache" technique: all cached data is read directly from memory, while Squid reads it from the hard disk, so Varnish is faster in terms of access speed.

3) Varnish supports more concurrent connections, because Varnish releases TCP connections faster than Squid and can therefore sustain more TCP connections under high concurrency.

4) Through its management port, Varnish can invalidate batches of cached objects with regular expressions, which Squid cannot do (see the example after this list).

5) Squid runs as a single process on a single CPU, while Varnish forks a worker process and handles requests with many threads, so it can reasonably use all CPU cores to process requests.
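As an illustration of point 4, such a batch purge can be issued through the management interface from the command line. A hedged sketch follows; the exact command differs by Varnish version (recent releases use ban via varnishadm or the management/telnet port, older ones used purge commands), and the URL pattern here is only an example:

    # Invalidate every cached object whose URL ends in .png
    varnishadm ban req.url '~' '\.png$'

    # Show the bans currently in effect
    varnishadm ban.list

Squid's management interface offers no comparable regex-based batch invalidation, which is why this is counted as a Varnish advantage.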

IV. Limitations and Solutions of Varnish

1. Varnish's deficiencies

Compared with the traditional Squid, Varnish has the following disadvantages:

1) Once the Varnish process hangs, crashes, or is restarted, all cached data is released from memory, and every request is then sent to the backend servers, which puts heavy pressure on them under high concurrency.

2) When requests for the same URL are distributed by HA/F5 (the load balancer) to different Varnish servers, each Varnish server that has not yet cached that URL passes the request through to the backend, and the same object ends up cached on several servers, which wastes Varnish cache resources and degrades performance.

2. Solutions

In summary, for high traffic volumes we recommend starting Varnish in its memory-cache mode and backing it with several Squid servers. Squid then serves as a second-layer cache behind Varnish and compensates for the cache that Varnish loses from memory whenever its service or server is restarted. The second problem can be solved by hashing on the URL at the load balancer, so that requests for a given URL are always directed to the same Varnish server (see the sketch below).
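As a hedged illustration of the URL-hash idea (not part of the original article), this is roughly how it could look if HAProxy were the load balancer in front of two Varnish servers; the backend name, server names, and addresses are placeholders:

    backend varnish_pool
        balance uri              # hash on the request URI
        hash-type consistent     # keep the URL-to-server mapping stable when servers change
        server varnish1 10.0.0.11:6081 check
        server varnish2 10.0.0.12:6081 check

With this in place, requests for the same URL always land on the same Varnish instance, so each object is cached only once instead of being fetched repeatedly from the backend.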

V. Varnish Workflow

Varnish starts two processes: a master (management) process and a child (worker) process. The master reads in the storage configuration, performs initialization, and then forks and monitors the child. The child allocates threads for the caching work: it manages its thread pools and spawns many worker threads.

While the child process's main thread initializes, it maps the large storage file into memory. If the file exceeds the system's virtual memory, the configured mmap size is reduced and loading continues. At this point, free storage structures are created, initialized, and placed into the storage-management structure, waiting to be allocated.
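For reference, a hedged example of how these pieces are usually started from the command line (standard varnishd options; the ports, sizes, and paths are illustrative only). The -T option opens the management interface used by the master process, and -s chooses the storage that the child maps and manages:

    # HTTP on :6081, management CLI on 127.0.0.1:6082, VCL loaded from a file,
    # and a 1 GB in-memory (malloc) cache.
    varnishd -a :6081 -T 127.0.0.1:6082 -f /etc/varnish/default.vcl -s malloc,1G

    # A file-backed, mmap'ed store can be used instead of malloc, e.g.:
    #   -s file,/var/lib/varnish/storage.bin,10G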

A Varnish thread responsible for accepting new HTTP connections then starts waiting for clients. When a new HTTP connection arrives, this thread only accepts it; it then wakes up a worker thread from the waiting thread pool, which processes the request.

After the worker thread reads the URI, it looks the object up in the cache; on a hit it returns the result directly, and on a miss it fetches the object from the backend server and stores it in the cache.

If the cache is full, old objects are evicted according to the LRU algorithm. In addition, a timeout (expiry) thread monitors the lifetime of every object in the cache: once an object's TTL expires, it is deleted and the corresponding storage memory is released.
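The eviction and expiry activity described above can be observed with varnishstat; a hedged example (these counter names come from recent Varnish versions and may differ in older releases):

    # How many objects have been evicted by LRU and how many have expired by TTL
    varnishstat -1 -f MAIN.n_lru_nuked -f MAIN.n_expired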

 


This article is from the "happy life" blog; please keep this source: http://tyjhz.blog.51cto.com/8756882/1440203
