HTTP caching is commonly implemented by two cache servers: Squid and Varnish.
Squid: supports both forward and reverse proxying; a heavyweight cache server whose performance stays very stable under high load.
Varnish: supports reverse proxying only; compared with Squid it is a very lightweight cache server and is less stable under heavy load, so it is usually referred to as an HTTP accelerator.
Features of Varnish:
I. Components
Management: the master process; it provides the command-line interface, manages the child processes, performs initialization, loads files, and so on.
Child/cache: the child process; it manages the cache, collects log and statistics data, accepts user requests, validates cached objects, and cleans up expired entries.
VCL: the Varnish Configuration Language, the cache control mechanism; VCL code is handed to a C compiler, built into a binary, and loaded by the child process.
II. Mode of operation
1. The requested resource is cacheable:
(1) When a client requests a resource that hits the Varnish cache and the cached copy has not expired, Varnish responds to the client directly.
(2) Acting as a reverse-proxy cache server, when a client requests a resource that misses the Varnish cache, Varnish plays the role of a client toward the back-end host, fetches the data, caches the resource locally first, and then sends it to the client.
(3) When a client requests a resource that hits the Varnish cache but the cached copy has expired, Varnish asks the back-end host whether the original resource has been updated. If it has not, the back-end host responds with headers only, and Varnish then sends the cached resource to the client. If the original resource has been updated, the back-end host responds with the new resource, which Varnish caches locally and then delivers to the client. If the resource has been deleted on the back-end host, the cached copy in Varnish is deleted as well and the client receives a 404 page.
2. The requested resource is not cacheable:
(1) When the client requests a resource that cannot be cached, Varnish fetches the content from the back-end host and hands it straight to the client instead of caching it locally first; a minimal sketch of this pass behaviour follows.
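To make the pass path concrete, here is a minimal vcl_recv sketch, not taken from the article's configuration, that marks requests which are usually not cacheable (non-GET/HEAD methods, or requests carrying Cookie or Authorization headers) to be passed straight to the back-end host; it roughly mirrors the behaviour of Varnish's built-in VCL:

sub vcl_recv {
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);        ## non-GET/HEAD requests are handed to the back-end, not cached
    }
    if (req.http.Authorization || req.http.Cookie) {
        return (pass);        ## personalised responses bypass the cache
    }
    return (hash);            ## everything else proceeds to the normal cache lookup
}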
Measuring cache effectiveness:
Cache hit ratio:
Document hit ratio: also called the object hit rate; measured by the number of requests answered from the cache (for example, 80 cache hits out of 100 requests is a document hit ratio of 80%).
Byte hit ratio: measured by the amount (bytes) of content served from the cache.
Cache life cycle:
Cache entry expiration: lazy cleanup.
Cache space exhausted: entries are evicted using the LRU (Least Recently Used) algorithm.
Note: when a cache entry expires it is not actually removed from memory right away; it is only marked as invalid, not deleted. The entry is really deleted only when the cache space is exhausted.
III. Cache storage modes
(1) file: the cache is kept in a file on disk, and Varnish manages the storage space within that file itself.
(2) malloc: in-memory cache; at startup Varnish uses the malloc() library call to request the specified amount of memory for the cache.
(3) persistent: cached objects are stored on disk so they can survive a restart.
IV. Cache freshness detection mechanism
Revalidation:
If the original content has not changed, only the headers are returned (no body); the response code is 304 (Not Modified).
If the original content has changed, it is returned to the client normally with response code 200.
If the original content has disappeared, the response is 404, and the corresponding entry (object) in the cache should be deleted as well.
Conditional request headers (an example exchange follows this list):
If-Modified-Since: validation based on the last-modified timestamp of the original content.
If-Unmodified-Since: the request is processed only if the content has not been modified since the given time; if it has changed, the server responds with 412 (Precondition Failed).
If-Match: the request is processed only if the supplied ETag matches the current one.
If-None-Match: validation based on an ETag comparison; if the ETag still matches, 304 is returned.
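For illustration only (the URL, timestamp, and ETag value below are made up), a revalidation exchange using these conditional headers looks roughly like this: the client or cache asks whether its copy is still current, and the origin answers 304 with no body if nothing changed:

GET /index.html HTTP/1.1
Host: www.example.com
If-Modified-Since: Thu, 26 May 2016 08:00:00 GMT
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"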
V. Cache control mechanism
Expiration:
HTTP/1.0 Expires: gives an absolute expiry time.
For example: Expires: Thu, 23 Jun 2016 23:38:18 GMT
HTTP/1.1 Cache-Control: max-age: gives a relative duration in seconds, counted from the moment the response is received (a related VCL sketch follows).
For example: Cache-Control: max-age=700
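As a hedged sketch (not part of the article's configuration), the same idea of a relative lifetime can also be applied inside Varnish: if the back-end host sends neither Expires nor Cache-Control, vcl_backend_response can assign a default TTL to the object; the 120-second value is just an example:

sub vcl_backend_response {
    if (!beresp.http.Expires && !beresp.http.Cache-Control) {
        set beresp.ttl = 120s;    ## cache the object for 120 seconds by default
    }
}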
VI. VCL state engine (a small routing sketch follows this list)
vcl_recv: receive the client request
vcl_hash: build the hash key and look up the cache
hit -> vcl_hit: the cache was hit
miss -> vcl_miss: the cache was not hit
purge -> vcl_purge: remove an object from the cache
pipe -> vcl_pipe: pipe mode; the connection is handed straight to the back-end host
pass / hit_for_pass -> vcl_pass: skip the cache and request the back-end host directly
vcl_backend_fetch: fetch the data from the back-end host
vcl_backend_response: handle the back-end host's response
vcl_backend_error: handle an error from the back-end host
vcl_synth: generate a synthetic response (for example, an error page) without contacting the back-end
vcl_deliver: deliver the response to the client
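As an illustrative sketch of how a request is steered through these states (not taken from the article; the purgers ACL and the /stream pattern are invented for the example), a vcl_recv routine can send a request into the purge, pipe, synth, or normal lookup paths:

acl purgers {                      ## hosts allowed to issue PURGE requests (hypothetical)
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip ~ purgers) {
            return (purge);        ## enters vcl_purge and removes the cached object
        }
        return (synth(405, "Not allowed"));   ## enters vcl_synth with a generated error page
    }
    if (req.url ~ "^/stream") {
        return (pipe);             ## enters vcl_pipe; bytes flow straight to the back-end host
    }
    return (hash);                 ## normal path: vcl_hash, then vcl_hit or vcl_miss
}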
VII. VCL syntax (illustrated in the short example after this list)
(1) Comments: //, #, /*......*/
(2) Subroutines are defined with sub
(3) No loops; conditional judgments are supported
(4) Built-in variables are available
(5) A subroutine ends with a return() statement; it names an action and has no return value
(6) Supported operators: =, ==, !=, ~, &&, ||
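The points above can be seen in a short, purely illustrative subroutine (the X-Cache header echoes the configuration used later in the article): both comment styles, a condition combining comparison and logical operators, built-in variables, and a return statement that names an action instead of returning a value:

// single-line comment
#  also a single-line comment
/* block comment */

sub vcl_deliver {                                  ## (2) subroutines are defined with "sub"
    if (obj.hits > 0 && resp.status == 200) {      ## (3)(4)(6) condition on built-in variables
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
    return (deliver);                              ## (5) return names an action, not a value
}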
VIII. Built-in VCL variables (a short usage sketch follows this list)
req.*: describes the request sent by the client
req.http.*: any header of the client's HTTP request message
bereq.*: the HTTP request Varnish sends to the back-end host
beresp.*: the headers of the HTTP response message sent by the back-end host
resp.*: the HTTP response message Varnish sends to the client
obj.*: the attributes of the object stored in the cache
Common variables:
bereq.*: the request Varnish sends to the back-end host for a resource
bereq.method: the request method (of the request message sent to the back-end host)
bereq.url: the requested URL
bereq.proto: the request protocol
bereq.backend: indicates which back-end host is to be used
beresp.*: the response the back-end host sends back to Varnish
beresp.proto: the response protocol
beresp.status: the status code of the response
beresp.reason: the reason phrase of the response
beresp.backend.name: the name of the back-end host that answered
beresp.ttl: the remaining lifetime of the object fetched from the back-end host
obj.hits: the number of times this object has been served from the cache
obj.ttl: the remaining TTL of the object, i.e. how much longer it may stay cached
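A hedged usage sketch (not from the original article; the X-Backend and X-Hits response headers are invented for the example) showing how a few of these variables are typically read and written in different subroutines:

sub vcl_backend_response {
    ## beresp.* describes the response received from the back-end host
    set beresp.http.X-Backend = beresp.backend.name;   ## record which back-end host answered
}

sub vcl_deliver {
    ## resp.* is the response sent to the client; obj.* is the cached object
    set resp.http.X-Hits = obj.hits;                    ## how many times this object was served from the cache
}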
Preparation environment: CentOS 7
Varnish server:   ip 172.18.42.200
Back-end host 1:  ip 172.18.42.201, provides the httpd service
Back-end host 2:  ip 172.18.42.202, provides the nginx service
I. Varnish server configuration
1. Install Varnish
# yum install varnish
# ll /etc/varnish/
total 20
-rw-r--r-- 1 root root 1155 May 20 17:07 default.vcl        ## defines the Varnish caching policy
-rw-r--r-- 1 root root 1224 May 20 11:10 default.vcl.wtc
-rw------- 1 root root   37 May 20 11:09 secret
-rw-r--r-- 1 root root  ... May 20 11:12 varnish.params     ## configures the caching mechanism
-rw-r--r-- 1 root root  ... May 20 11:12 varnish.params.bak
2. Change the Varnish cache storage mode
# vim /etc/varnish/varnish.params
#VARNISH_STORAGE="file,/var/lib/varnish/varnish_storage.bin,1G"   ## the default caching mechanism
VARNISH_STORAGE="malloc,1G"                                       ## change the cache to malloc (in-memory) storage
3. Define the back-end host
# vim /etc/varnish/default.vcl
backend default {
    .host = "172.18.42.202";   ## back-end host
    .port = "80";              ## the web service listens on port 80
}
II. Configure the back-end hosts
1. Install httpd and PHP on back-end host 1
# yum install httpd -y
# systemctl start httpd.service
2. Install nginx on back-end host 2
# yum install nginx
# nginx
III. Configure the caching policy in default.vcl
1. When the requested resource is cacheable, report whether the cache was hit
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
## if the requested resource misses the cache the response header shows X-Cache: MISS; on a hit it shows X-Cache: HIT
First request: the object is not yet cached, so the response header shows X-Cache: MISS (screenshot).
Second request: the Varnish cache is hit and the response header shows X-Cache: HIT (screenshot).
2. Force certain resources to bypass the cache
On the back-end host, create a new directory "wtc" and a test page:
# mkdir /web/lweim/wtc
# vim /web/lweim/wtc/index.html
On the Varnish server, add to default.vcl:
sub vcl_recv {
    if (req.url ~ "(?i)^/wtc") {
        return (pass);
    }
}
## when the requested URL contains "wtc", the cache is not used; Varnish requests the back-end host directly
First request: the URL matches the pattern, so Varnish fetches the page directly from the back-end host (screenshot).
Second request: the response is again fetched from the back-end host rather than served from the cache (screenshot).
3. Define multiple back-end hosts
backend web2 {
    .host = "172.18.42.202";
    .port = "80";
}
backend web1 {
    .host = "172.18.42.201";
    .port = "80";
}
sub vcl_recv {
    if (req.url ~ "(?i)\.php$") {
        set req.backend_hint = web1;   ## requests whose URL contains .php are answered by back-end host web1
        return (pass);
    } else {
        set req.backend_hint = web2;   ## requests without .php in the URL are answered by back-end host web2
        return (pass);
    }
}
First request, a static resource: answered by back-end host web2 (screenshot).
Second request, a PHP resource: answered by back-end host web1 (screenshot).
4. Load balancing
import directors;                       ## load the directors module used to define a host group
backend web1 {                          ## define the first back-end server
    .host = "172.18.42.201";
    .port = "80";
}
backend web2 {                          ## define the second back-end server
    .host = "172.18.42.202";
    .port = "80";
}
sub vcl_init {                          ## define a server group
    new web = directors.round_robin();
    web.add_backend(web1);
    web.add_backend(web2);
}
sub vcl_recv {
    if (req.url ~ "(?i)^/wtc") {
        set req.backend_hint = web.backend();
        return (pass);                  ## request the back-end host directly
    }
}
First request: answered by one of the back-end hosts (screenshot).
Second request: answered by the other back-end host, showing the round-robin distribution (screenshot).
5. Health checks for the back-end hosts
backend web1 {
    .host = "172.18.42.201";
    .port = "80";
    .probe = {
        .url = "/";          ## the URL requested by each probe
        .interval = 1s;      ## how often the back-end host is probed
        .window = 8;         ## how many recent probes are considered
        .threshold = 5;      ## how many of them must succeed for the host to be considered healthy
        .timeout = 2s;       ## probe timeout; the default is 2s
    }
}
6. Modify Varnish run-time parameters
# varnishstat
MAIN.pools            4    0.00   .    4.00
# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082 param.set thread_pools 2
200
## a response code of 200 means the change succeeded
# varnishstat
MAIN.pools            2    0.00   .    2.00
7. View Varnish's working status and related statistics
# varnishstat
This article is from the "WTC" blog; please keep this source when reproducing it: http://wangtianci.blog.51cto.com/11265133/1782214
Varnish basic applications in Linux