High-Performance Web Cache Accelerator: Varnish


Web caching is a technology for temporarily caching Web documents; it can effectively reduce bandwidth usage and server load and improve the user experience. Why do we need a cache? Empirical studies have found that program execution exhibits two kinds of locality:

Temporal locality: data that has been accessed is very likely to be accessed again.

Spatial locality: data near recently accessed data is likely to be accessed soon.

Exploiting these locality properties, a resource that has been accessed is loaded into high-speed storage, so it no longer has to be fetched from its source location, and the surrounding data is loaded into high-speed storage as well, speeding up access. Caching, then, is the use of fast devices and good structural design to accelerate access to resources.

I. Web Caching

A Web cache accelerator is usually deployed at the scheduler, in front of the real servers or other proxy servers, where it accepts user requests. On a cache hit it still has to check that the cached object is usable; if so, it builds the response and sends it to the client. On a miss, or when the cached object is unusable, it acts as a reverse proxy and requests the content from the real server; depending on the configuration the result is cached or not, and the response is returned to the client as well. Varnish is therefore also a Web reverse proxy server.

The Web cache control mechanism relies mainly on HTTP header fields. HTTP/1.0 introduced the Expires header, which states an absolute expiration time: once that time has passed, the cached copy is stale and the request is forwarded to the real server, whose response is cached again. HTTP/1.1 introduced the Cache-Control family of directives, which include the following:

Source      Directive          Meaning
Request     no-cache           may be cached, but must be revalidated with the origin server before being served
            no-store           must not be cached
            max-age            maximum time the response may be cached
Response    s-maxage           maximum time the response may be kept in shared (public) caches
            public             the response may be stored by any cache
            must-revalidate    a stale entry must be revalidated before use
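For example, a response that shared caches may keep for 10 minutes but private caches for an hour could carry headers like these (all values are illustrative):

```http
HTTP/1.1 200 OK
Cache-Control: public, max-age=3600, s-maxage=600
Content-Type: text/html
```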

Validation relies on conditional request headers, such as:

If-Modified-Since: asks whether the file has been updated, based on the timestamp of the original content's last modification;

If-Unmodified-Since: the opposite check, succeeding only if the file has not been updated since the timestamp of the original content's last modification;

Some content changes so frequently that timestamp-based caching is ineffective; validation is then based on the ETag, a check code over the content, which is compared for a match.

If-Match: ETag-based comparison; succeeds when the ETags are the same;

If-None-Match: ETag-based comparison; succeeds when the ETags differ;

Freshness revalidation: in general, the backend judges a conditional request as follows:

(1) if the original content has not changed, it responds with headers only (no body), status code 304 (Not Modified);

(2) if the original content has changed, it responds normally, status code 200;

(3) if the original content has disappeared, it responds 404, and the corresponding cache entry should also be deleted.
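A sketch of case (1): the client revalidates with If-None-Match, and the origin answers 304 with headers only (the URL and ETag value are made up):

```http
GET /logo.jpg HTTP/1.1
Host: www.example.com
If-None-Match: "5f2b8c1a"

HTTP/1.1 304 Not Modified
ETag: "5f2b8c1a"
```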

II. Installing and Configuring Varnish

The varnish package is in the EPEL repository, so configure a yum repository that includes EPEL and install it; then look at the installed files.

The main configuration files are:

/etc/varnish/varnish.params    # run-time configuration file

/etc/varnish/default.vcl       # processing-control file, written in the VCL language

/etc/varnish/secret            # pre-shared secret file for the server and the varnishadm command line

First, the settings in varnish.params:

# varnish.params configuration file

# Set this to 1 to make systemd "reload" try to switch VCL without restart.
RELOAD_VCL=1                                # when 1, "systemctl reload varnish" recompiles the VCL file so a new configuration takes effect

# Main configuration file. You probably want to change it.
VARNISH_VCL_CONF=/etc/varnish/default.vcl   # default VCL file location, can be changed

# Default address and port to bind to. Blank address means all IPv4
# and IPv6 interfaces, otherwise specify a host name, an IPv4 dotted
# quad, or an IPv6 address in brackets.
VARNISH_LISTEN_ADDRESS=192.168.1.5          # listening address, defaults to all
VARNISH_LISTEN_PORT=80                      # cache listening port; behind port mapping in HAProxy/Nginx any port will do

# Admin interface listen address and port
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1      # listening address of the management interface
VARNISH_ADMIN_LISTEN_PORT=6082              # port of the management interface

# Shared secret file for admin interface
VARNISH_SECRET_FILE=/etc/varnish/secret     # pre-shared secret file

# Backend storage specification, see "storage types" in the varnishd(5)
# man page for details.
#VARNISH_STORAGE="file,/var/lib/varnish/varnish_storage.bin,1G"
VARNISH_STORAGE="malloc,128M"               # cache storage; the common types are:
                                            #   malloc[,size]
                                            #   file[,path[,size[,granularity]]]  (granularity: grain size)

# Default TTL used when the backend does not specify one
VARNISH_TTL=120                             # default cache duration

# User and group for the varnishd worker processes
VARNISH_USER=varnish
VARNISH_GROUP=varnish

# Other options, see the man page varnishd(1)
#DAEMON_OPTS="-p thread_pool_min=5 -p thread_pool_max=500 -p thread_pool_timeout=300"
                                            # thread pool minimum, maximum and idle timeout; these are
                                            # run-time parameters, remove the "#" to use them

As you can see from its service unit file, each name in this file is used as a macro variable; the last one, DAEMON_OPTS, lets varnish receive multiple run-time configuration options in -p name=value format.


III. VCL, the Varnish Configuration Language

VCL lets the user define the caching policy. The manager process translates the VCL into C code, the C compiler compiles it into binary code, and the result is linked into the child process to take effect. VCL policies are implemented by hooking into a number of cache-processing states inside varnish.

VCL has multiple state engines; the states are related to one another but isolated from each other. Each engine uses return(x) to exit the current state and enter the next.

(1) The request is cacheable:

(a) hit: respond from the local cache;

(b) miss: go to the backend server to fetch the content;

cacheable object: cache it and then respond;

when caching:

define the cache duration;

customize the cache key;

non-cacheable object: respond directly without caching;

(2) The request is not cacheable:

(a) go to the backend server to fetch the content.
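This flow maps onto Varnish 4's built-in subroutines. A minimal VCL sketch of the transitions (the built-in VCL already behaves this way, so none of this is strictly required):

```vcl
vcl 4.0;

sub vcl_recv {
    return (hash);              # cacheable request: look it up in the cache
}

sub vcl_hash {
    hash_data(req.url);         # custom cache key: the URL ...
    hash_data(req.http.Host);   # ... plus the Host header
    return (lookup);
}

sub vcl_miss {
    return (fetch);             # miss: fetch the content from the backend
}
```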

VCL Syntax:

(1) //, #, and /* ... */ are comments; statements end with ";";

(2) sub $name defines a subroutine;

(3) loops are not supported, but conditionals are;

(4) there are built-in variables;

(5) a state is terminated with a return(action) statement; subroutines have no return values;

(6) operators: =, ==, !=, ~, &&, ||
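A minimal sketch that ties these rules together (the URL pattern and the choice to strip cookies are made up for illustration):

```vcl
sub vcl_recv {
    # ~ matches a regular expression, == compares, && combines conditions.
    # There are no loops, only conditionals acting on built-in variables.
    if (req.method == "GET" && req.url ~ "\.(png|jpg)$") {
        unset req.http.Cookie;  # built-in variable: the request's Cookie header
        return (hash);          # return() terminates this state; no value is returned
    }
}
```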

With some programming experience, VCL is quick to pick up; without it, expect to invest more effort. Here are some simple VCL examples.

IV. Basic Varnish Running Examples

Varnish has a default state-transition mechanism, so it can run even if you define nothing yourself.

The basic logical topology of the experiment: the client sends requests to Varnish, which caches for and proxies to the backend Web server. (The original topology figure is not reproduced here.)


Example one: record the cache hit status in an HTTP response header

The configuration file looks like this:

backend default {
    .host = "192.168.0.133";      # backend host IP address
    .port = "80";                 # backend host port; port mapping is possible
}
...
sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to
    # send the response to the client.
    #
    # You can do accounting or modifying the final object here.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT via " + server.ip;
    } else {
        set resp.http.X-Cache = "MISS via " + server.ip;
    }
}

Here you can use varnishadm to dynamically load a new configuration and change varnish's caching behavior:

varnishadm [-t timeout] [-S secret_file] [-T address:port]

-t: timeout;

-S: specify the pre-shared secret file;

-T: address and port of the varnish management interface.

# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
vcl.list
200                                          # response code
available       0 boot                       # status (active: in use; available: loaded) and configuration name
active          0 reload_2016-05-22T01:41:40

vcl.show boot
200
# This is an example VCL file for varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
...

vcl.load test1 default.vcl                   # compile a VCL file; the default default.vcl is used here, any file will do
200
VCL compiled.

vcl.use test1                                # link the compiled binary into the child process, changing its cache behavior
200
VCL 'test1' now active

vcl.list
200
available       0 boot
available       0 reload_2016-05-22T01:41:40
active          0 test1                      # the new configuration is now active

After modifying default.vcl, compile and activate it again:

vcl.load test2 default.vcl
200
VCL compiled.

vcl.use test2
200
VCL 'test2' now active

Now visit a page and, using the Firefox developer tools (F12), you can see:

X-Cache: HIT via 192.168.0.132

Example two: set some pages to never be cached

sub vcl_recv {
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you
    # don't need, rewriting the request, etc.
    if (req.url ~ "^/test.html$") {
        return (pass);
    }
}

Requesting test.html again will never hit the cache.

Example three: force a cache duration for public image resources by removing their private markers

sub vcl_backend_response {
    if (beresp.http.Cache-Control !~ "s-maxage") {
        if (bereq.url ~ "(?i)\.jpg$") {
            set beresp.ttl = 7200s;
            unset beresp.http.Set-Cookie;
        }
        if (bereq.url ~ "(?i)\.css$") {
            set beresp.ttl = 3600s;
            unset beresp.http.Set-Cookie;
        }
    }
}

Now JPG images and CSS files are cached for the forced durations (7200s and 3600s) even when the origin response would not otherwise be cacheable.

V. Other VCL Examples

Example one:

acl purgers {
    "127.0.0.0"/8;
    "192.168.0.0"/16;
}

sub vcl_purge {
    return (synth(200, "Purged"));
}

sub vcl_recv {
    ...
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Purging not allowed for " + client.ip));
        }
        return (purge);
    }
    ...
}

# Use curl -X PURGE to set the request method; synth(code, reason_phrase) returns the given status code and reason phrase.
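On the wire, the purge is an ordinary HTTP request with the method changed; issued via curl -X PURGE, it would look roughly like this (the host name is made up):

```http
PURGE /test.html HTTP/1.1
Host: www.example.com
```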

Example two: load balancing

import directors;

backend websrv1 { .host = ...; .port = ...; }
backend websrv2 { .host = ...; .port = ...; }

sub vcl_init {
    new websrvs = directors.round_robin();
    websrvs.add_backend(websrv1);
    websrvs.add_backend(websrv2);
}

sub vcl_recv {
    set req.backend_hint = websrvs.backend();
    ...
}

Requesting the uncacheable test.html repeatedly, you can see the responses alternating between the backends according to the round-robin (RR) algorithm.

Example three: health checks for the backend hosts

Varnish needs to automatically remove failed backend servers, and re-add them to the backend group when they come back into service.

backend websrv1 {
    .host = "192.168.0.131";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 1s;
        .window = 8;        # number of recent probes to consider
        .threshold = 5;     # minimum number of successes to be considered healthy
        .timeout = 2s;      # probe timeout
    }
}
backend websrv2 {
    .host = "192.168.0.133";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 1s;
        .window = 8;        # number of recent probes to consider
        .threshold = 5;     # minimum number of successes to be considered healthy
        .timeout = 2s;      # probe timeout
    }
}

When the HTTP service on the 133 server is stopped, varnish no longer dispatches requests to the 133 host;

when the 133 host brings its HTTP service back up, varnish sends requests to the 133 server again.


This article is from the "Deep Sea Fish" blog; please keep this source: http://kingslanding.blog.51cto.com/9130940/1782958
