Varnish details and practices

Source: Internet
Author: User
Tags: varnish

I. Introduction:

Varnish is a high-performance, open-source reverse proxy server and HTTP accelerator. It adopts a completely new software architecture and works closely with modern hardware. Compared with the traditional Squid, Varnish is stable, efficient, and consumes fewer resources.

The official Varnish site is https://www.varnish-cache.org; RPM packages can be downloaded from http://repo.varnish-cache.org.

II. Varnish structure and features:

1. varnish Structure

Three major parts: client, varnish processing, and varnish logs.

 

2. varnish features:

Varnish mainly runs two processes: the management process and the child process (also called the cache process).

[Figure: Varnish management process and child (cache) process]

The management process applies new configuration, compiles VCL, monitors Varnish, initializes the child process, and provides a command-line interface. It also probes the child process at regular intervals; if the child process does not respond within the specified time, the management process restarts it.
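
For example, the management interface can be reached with varnishadm; a minimal sketch, assuming the default secret file and admin port used later in this article:

varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082   # connect to the management command line
varnish> ping                                          # check that the child process answers
varnish> status                                        # show whether the child (cache) process is running
varnish> panic.show                                    # display the last panic if the child process has crashed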

The child process contains multiple types of threads. The key points are:

Acceptor thread: accepts new connection requests and hands them off to a worker thread.

Worker threads: the child process starts one worker thread per session, so hundreds or more worker threads may exist in highly concurrent scenarios.

Expiry thread: removes expired objects from the cache.

Varnish relies on workspaces to reduce contention when threads need to acquire or modify memory. There are several different workspaces in Varnish; the most important is the session workspace, which is used to manage session data.
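
These threads and workspaces are tunable through run-time parameters. A minimal sketch of inspecting them with varnishadm (parameter names as in Varnish 4.x; older releases use slightly different names, e.g. sess_workspace):

varnishadm param.show thread_pools       # number of worker thread pools
varnishadm param.show thread_pool_min    # minimum number of worker threads per pool
varnishadm param.show thread_pool_max    # maximum number of worker threads per pool
varnishadm param.show workspace_client   # workspace reserved per client request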

3. varnish log:

The Varnish shared memory log is generally about 90 MB and is divided into two parts: counters and client request data. Tools such as varnishlog, varnishncsa, and varnishstat read the shared memory log and present the information in the desired format.
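
A minimal sketch of how these tools are typically invoked (only standard options of the stock tools are used):

varnishstat -1                                       # dump all counters once (cache hits, misses, backend traffic, ...)
varnishlog                                           # follow the shared memory log in real time
varnishncsa -D -a -w /var/log/varnish/access.log     # run as a daemon and append NCSA-style access logs to a file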

4. disadvantages of varnish:

Under high concurrency, Varnish consumes more resources than Squid, such as CPU, I/O, and memory. Once the Varnish process hangs, crashes, or is restarted, all cached data is released from memory; every request then goes to the backend application servers, which puts heavy pressure on them under high concurrency.

III. Working principle and workflow of Varnish

Working principle:

1. Like most server software, Varnish is divided into a master (management) process and a child process. The master process reads the storage configuration, calls the appropriate storage type, creates or opens a cache file of the corresponding size, initializes the structures that manage the storage space, and then forks and monitors the child process. While the main thread initializes, the child process maps the previously opened storage file into memory, creates and initializes the free-block structures, and attaches them to the storage-management structure so they can be allocated. The child process then spawns several threads for its work, including a few management threads and many worker threads.
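
For reference, these pieces come together on the varnishd command line. A minimal sketch, with an illustrative listen address, VCL path, and storage size:

varnishd -a :80 -T 127.0.0.1:6082 -f /etc/varnish/default.vcl -s malloc,64M
# -a : address/port the acceptor thread listens on
# -T : management (CLI) interface address
# -f : VCL configuration file compiled at startup
# -s : storage backend and size ("file,/path,1G" is also possible)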

2. The real work then starts. One Varnish thread is responsible for accepting new HTTP connections and waits for clients. When a new HTTP connection arrives, this thread accepts it, wakes up a waiting worker thread, and hands the connection over to it. The worker thread reads the URI of the HTTP request and looks for an existing object. On a hit, it returns the cached result directly to the client. On a miss, the requested content is fetched from the backend server, stored in the cache, and then returned.
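
From the client side, hits and misses can be observed with curl; a sketch, where the URL is illustrative and the X-Cache header is the one added by the VCL later in this article:

curl -I http://127.0.0.1/index.html   # first request: usually a miss, fetched from the backend
curl -I http://127.0.0.1/index.html   # repeat request: served from cache; the Age header grows and
                                      # X-Varnish carries two transaction IDs on a hit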

3. The cache allocation process: Varnish creates a cache object whose size is based on the object it reads. For ease of reading and writing, the program rounds each object up to the nearest multiple of the memory page size. It then finds the most suitable free block among the existing free-storage structures and assigns it to the object. If the free block is not fully used, the excess memory becomes another free block and is attached back to the management structure. If the cache is full, the oldest objects are released according to the LRU mechanism.
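
Whether LRU eviction is actually happening can be read from the counters; a minimal sketch (the counter is called n_lru_nuked, prefixed with MAIN. in Varnish 4.x):

varnishstat -1 | grep lru_nuked   # a steadily growing value means the cache is full and old objects are being evicted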

4. The cache release process: a timeout (expiry) thread checks the lifetime of all objects in the cache. When an object's time-to-live (TTL) has passed without it being accessed, the thread deletes it and releases the corresponding structure and storage memory. Note: when a memory block is released, the free blocks before and after it are checked; if the freed memory is contiguous with a neighbouring free block, the two are merged into one larger block.
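
The lifetime used when the backend does not specify one is controlled by the default_ttl parameter; a minimal sketch of inspecting and changing it at run time:

varnishadm param.show default_ttl       # current default TTL for cached objects, in seconds
varnishadm param.set default_ttl 600    # cache objects for 10 minutes by default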

5. The file cache management does not otherwise concern itself with the relationship between the file and memory; in effect, all objects are treated as if they are in memory. If system memory runs short, the operating system swaps them out to swap space automatically, without any involvement from the Varnish program.

Official workflow:

[Figure: official Varnish request processing workflow]

IV. Varnish practices:

1. Topology:

[Figure: test topology]

2. installation and configuration:
# Package: http://repo.varnish-cache.org/redhat/varnish-4.0/el6/

yum -y install varnish-3.0.5-1.el6.x86_64.rpm varnish-docs-3.0.5-1.el6.x86_64.rpm varnish-libs-3.0.5-1.el6.x86_64.rpm

vim /etc/sysconfig/varnish                        # edit the configuration file and change the following
VARNISH_LISTEN_PORT=80                            # varnish listening port, changed to 80
VARNISH_STORAGE_SIZE=64M                          # adjust to your own situation; a test environment can use a lower value
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"  # use malloc (memory) as the cache object storage method

service varnish start                             # start varnish

varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082   # log on to the management command line
varnish> vcl.list                    # list all loaded configurations
varnish> vcl.load test1 ./test.vcl   # load and compile a new configuration; test1 is the configuration name, test.vcl is the configuration file
varnish> vcl.use test1               # switch to a configuration; the name must be specified; the configuration from the last vcl.use is the active one
varnish> vcl.show test1              # display the configuration content; the name must be specified
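
After starting, it is worth confirming that Varnish is actually answering on port 80; a minimal sketch (any URL served by the backends will do):

curl -I http://127.0.0.1/            # the response should carry a Via header added by varnish, plus an X-Varnish header
netstat -tnlp | grep varnishd        # confirm varnishd is listening on port 80 and the admin port 6082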

 

3. Main Configuration File Analysis

vim /etc/varnish/default.vcl
# This is a basic VCL configuration file for varnish.
# See the vcl(7) man page for details on VCL syntax and semantics.

import directors;                    # import the directors module

probe backend_healthcheck {          # create a health-check probe
    .url = "/health.html";
    .window = 5;
    .threshold = 2;
    .interval = 3s;
}

backend web1 {                       # create a backend host
    .host = "192.168.1.4";
    .port = "80";
    .probe = backend_healthcheck;
}
backend web2 {
    .host = "192.168.1.5";
    .port = "80";
    .probe = backend_healthcheck;
}
backend img1 {
    .host = "192.168.1.4";
    .port = "4040";
    .probe = backend_healthcheck;
}
backend img2 {
    .host = "192.168.1.5";
    .port = "4040";
    .probe = backend_healthcheck;
}

sub vcl_init {                       # create the backend host groups, i.e. directors
    new web_cluster = directors.random();
    web_cluster.add_backend(web1, 1.0);
    web_cluster.add_backend(web2, 1.0);
    new img_cluster = directors.random();
    img_cluster.add_backend(img1, 1.0);
    img_cluster.add_backend(img2, 1.0);
}

acl purgers {                        # define the source addresses allowed to purge
    "127.0.0.1";
    "192.168.0.0"/24;
}

sub vcl_recv {
    if (req.method == "GET" && req.http.Cookie) {    # GET requests with a Cookie header are cached as well
        return (hash);
    }
    if (req.url ~ "test.html") {                     # do not cache the test.html file
        return (pass);
    }
    if (req.method == "PURGE") {                     # handle purge requests
        if (!client.ip ~ purgers) {
            return (synth(405, "Method not allowed"));
        }
        return (hash);
    }
    if (req.http.X-Forwarded-For) {                  # add an X-Forwarded-For header for requests sent to the backend hosts
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For + "," + client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
    if (req.http.host ~ "(?i)^(www\.)?lnmmp\.com$") {    # route to different backend groups based on the requested domain
        set req.http.host = "www.lnmmp.com";
        set req.backend_hint = web_cluster.backend();
    } elsif (req.http.host ~ "(?i)^images\.lnmmp\.com$") {
        set req.backend_hint = img_cluster.backend();
    }
    return (hash);
}

sub vcl_hit {
    if (req.method == "PURGE") {     # purge a cached object
        purge;
        return (synth(200, "Purged"));
    }
}

sub vcl_miss {
    if (req.method == "PURGE") {     # purge request for an object that is not in the cache
        purge;
        return (synth(404, "Not in cache"));
    }
}

sub vcl_pass {
    if (req.method == "PURGE") {     # purge request on a passed object
        return (synth(502, "PURGE on a passed object"));
    }
}

sub vcl_backend_response {           # customize the cache lifetime (TTL) of cached files
    if (bereq.url ~ "\.(jpg|jpeg|gif|png)$") {
        set beresp.ttl = 7200s;
    }
    if (bereq.url ~ "\.(html|css|js)$") {
        set beresp.ttl = 1200s;
    }
    if (beresp.http.Set-Cookie) {    # backend responses with a Set-Cookie header are not cached and are delivered directly to the client
        return (deliver);
    }
}

sub vcl_deliver {
    if (obj.hits > 0) {              # add an X-Cache header to the response to show whether the cache was hit
        set resp.http.X-Cache = "hit from " + server.ip;
    } else {
        set resp.http.X-Cache = "miss";
    }
}

 

4. Test results:

[Screenshot: test results]
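
For reference, a test like this can be reproduced with curl from a client in the purgers network; a sketch, where the paths are illustrative and the hostnames from the VCL above must resolve to the Varnish server:

curl -I http://www.lnmmp.com/index.html         # check the X-Cache header: "miss" on the first request, then "hit from <ip>"
curl -I http://images.lnmmp.com/logo.png        # image requests are routed to the img_cluster backend group
curl -X PURGE http://www.lnmmp.com/index.html   # evict a cached object; allowed only from the purgers ACL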

This article is from the Gentoo blog, please be sure to keep this source http://linuxgentoo.blog.51cto.com/7678232/1558263
