Basic Varnish applications in Linux
HTTP caching is commonly implemented with one of two servers: Squid or Varnish.
Squid: supports both forward and reverse proxying; a heavyweight cache server with stable performance under high load.
Varnish: supports reverse proxying only. Compared with Squid it is a lightweight cache server, and it is less stable than Squid under sustained high load. It is generally called an HTTP accelerator.
Varnish features:
I. Components
Management: the master process; it provides the command-line interface, manages the child processes, performs initialization, loads configuration files, and so on.
Child/Cache: manages the cache, gathers log and statistics data, accepts user requests, and validates and purges cache entries.
VCL: the Varnish Configuration Language; the cache-control mechanism. VCL code is translated to C, compiled by the C compiler into a binary (shared object), and loaded by the child process.
II. Working Mode:
1. The requested resource is cacheable:
(1) If the client's request hits the Varnish cache and the cached object has not expired, Varnish responds to the client directly from the cache.
(2) Varnish acts as a reverse-proxy cache server. When the client's request misses the cache, Varnish assumes the client role, requests the data from the backend host, caches the resource locally, and then sends it to the client.
(3) If the request hits the cache but the cached object has expired, Varnish asks the backend host whether the original resource has changed. If it has not changed, the backend responds with headers only, and Varnish serves the cached resource to the client. If the resource has changed, the backend responds with the new resource, Varnish caches it locally, and then responds to the client. If the resource has been deleted on the backend host, the cached copy in Varnish is deleted as well, and the client receives a 404 response.
2. The requested resource is not cacheable:
(1) When the client requests a non-cacheable resource, Varnish fetches the content from the backend host and returns it to the client directly, without caching it locally first.
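In practice, requests carrying credentials or cookies are typically treated as non-cacheable. A minimal, illustrative VCL 4.0 sketch (not part of the original configuration) that bypasses the cache for such requests:

```vcl
sub vcl_recv {
    # Requests with Authorization or Cookie headers are usually
    # user-specific, so skip the cache and go straight to the backend.
    if (req.http.Authorization || req.http.Cookie) {
        return (pass);
    }
}
```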
Measuring the effectiveness of a cache:
Cache hit rate:
Document hit rate: also called the resource hit rate; measured by the number of documents served from the cache
Byte hit rate: measured by the amount of content (bytes) served from the cache
Cache lifecycle:
Cache item expiration: lazy (deferred) cleanup
Cache space exhausted: LRU (least recently used) eviction
Note: when a cache item expires, it is not actually removed from memory; it is only marked invalid. Items are truly deleted only when the cache space is exhausted.
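The lifetime of a cached object can also be forced from VCL. A hedged sketch (the 120-second TTL is an arbitrary example value):

```vcl
sub vcl_backend_response {
    # Give objects the backend did not mark cacheable a short TTL;
    # once the TTL expires the object is only flagged invalid, and is
    # really evicted when the cache space fills up (LRU).
    if (beresp.ttl <= 0s) {
        set beresp.ttl = 120s;
    }
}
```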
III. Storage (cache) backends:
(1) file: cached objects are stored in a single memory-mapped file on disk; the index is kept in memory, and the cache does not survive a restart.
(2) malloc: in-memory cache. It uses the malloc() library call; when Varnish starts, it requests a block of memory of the specified size for caching.
(3) persistent: stored on disk and retained across restarts (experimental).
IV. Cache Freshness Detection Mechanism
Validity verification:
If the original content has not changed, only the response headers are returned (no body); the response code is 304 (Not Modified).
If the original content has changed, the full content is sent to the client with response code 200 (OK).
If the original content has disappeared, the response is 404 (Not Found), and the corresponding object in the cache should be deleted as well.
Conditional request headers:
If-Modified-Since: validation based on the last-modification timestamp of the original content
If-Unmodified-Since: the request is carried out only if the content has not been modified since the given time
If-Match: validation based on ETag comparison; the request is carried out only if the ETag matches
If-None-Match: validation based on ETag comparison; the content is returned only if no listed ETag matches
V. Cache Control Mechanism
Expiration:
HTTP/1.0 Expires: gives an absolute expiry time.
Example: Expires: Thu, 04 Jun 2015 23:38:18 GMT
HTTP/1.1 Cache-Control: max-age: gives a relative lifetime in seconds, counted from the moment the response is received.
Example: Cache-Control: max-age=700
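These response headers can be inspected from VCL when deciding how to cache. A sketch, assuming a policy of honoring an explicit no-cache directive from the backend (the 120s value is an arbitrary hit-for-pass lifetime):

```vcl
sub vcl_backend_response {
    # Respect a no-cache directive: mark the object hit-for-pass so
    # later requests for it go to the backend without waiting in line.
    if (beresp.http.Cache-Control ~ "no-cache") {
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        return (deliver);
    }
}
```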
VI. VCL state engines
vcl_recv: receives the client request
vcl_hash: computes the hash key used to look up the request in the cache
hit: vcl_hit: the lookup hit the cache
miss: vcl_miss: the lookup missed the cache
purge: vcl_purge: purges a cached object
pipe: vcl_pipe: pipe mode; the connection is handed directly to the backend host
pass, hit_for_pass: vcl_pass # skip the cache and request the backend host directly
vcl_backend_fetch: sends the request to the backend host
vcl_backend_response: handles the backend host's response
vcl_backend_error: handles an error returned by the backend host
vcl_synth: generates a synthetic response from within Varnish
vcl_deliver: delivers the response to the client
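The synth path is the one least like the others: the response never touches a backend. An illustrative sketch (the /varnish-ping URL is an invented example):

```vcl
sub vcl_recv {
    # Answer a health-check URL from Varnish itself, with no backend.
    if (req.url == "/varnish-ping") {
        return (synth(200, "OK"));
    }
}

sub vcl_synth {
    # Build the synthetic response body and deliver it to the client.
    set resp.http.Content-Type = "text/plain";
    synthetic("Varnish is alive");
    return (deliver);
}
```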
VII. VCL syntax
(1) Comments: //, #, /* ... */
(2) Code is organized into subroutines (sub)
(3) Loops are not supported; if/else conditionals are
(4) Built-in variables are available
(5) A subroutine is terminated with a return (action) statement; subroutines have no return value
(6) Supported operators: = (assignment), == (equality), ~ (regex/ACL match), != (inequality), && (and), || (or)
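The operators above combine in conditions as in C. An illustrative sketch:

```vcl
sub vcl_recv {
    # == tests equality; ~ does a regular-expression match;
    # && / || are logical and/or; = assigns.
    if (req.method == "GET" && req.url ~ "\.(jpg|png|css|js)$") {
        # Static assets rarely vary per user, so drop the cookie
        # to make them cacheable.
        unset req.http.Cookie;
    }
}
```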
VIII. VCL built-in variables
req.*: variables of the request received from the client
req.http.*: any header of the client's HTTP request message
bereq.*: variables of the HTTP request Varnish sends to the backend host
beresp.*: variables of the HTTP response message received from the backend host
resp.*: variables of the HTTP response message Varnish sends to the client
obj.*: attributes of the object stored in the cache
Common variables:
bereq: the request Varnish sends to the backend host for a resource
bereq.method: the request method (of the request sent to the backend host)
bereq.url: the request URL
bereq.proto: the request protocol
bereq.backend: the backend host to be used
beresp: the response Varnish receives from the backend host
beresp.proto: the response protocol
beresp.status: the response status code
beresp.reason: the response reason phrase
beresp.backend.name: the name of the backend host
beresp.ttl: the remaining time-to-live of the object fetched from the backend host
obj.hits: the number of times this object has been served from the cache
obj.ttl: the object's TTL value, i.e. its cache lifetime
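A common use of obj.hits is to expose whether a response was served from the cache; a minimal sketch (the X-Cache header name is a convention, not required by Varnish):

```vcl
sub vcl_deliver {
    # obj.hits > 0 means the object came out of the cache.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
```

Checking this header with curl -I against the Varnish server is a quick way to verify the cache is working.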
Prerequisites: CentOS 7
Varnish server: IP 172.18.42.200
Backend host 1 (httpd service): IP 172.18.42.201
Backend host 2 (httpd service): IP 172.18.42.202
1. Varnish Server Configuration
1. Install varnish
[root@node0 ~]# yum install varnish
[root@node0 ~]# ll /etc/varnish/
total 20
-rw-r--r-- 1 root root 1155 May 20 17:07 default.vcl         # Varnish cache policy (VCL)
-rw-r--r-- 1 root root 1224 May 20 11:10 default.vcl.wtc
-rw------- 1 root root   37 May 20 11:09 secret
-rw-r--r-- 1 root root 1200 Mar 16  2015 varnish.params      # cache mechanism / runtime parameters
-rw-r--r-- 1 root root 1200 May 20 11:12 varnish.params.bak
2. Change the Varnish storage (cache) backend:
[root@node0 varnish]# vim varnish.params
#VARNISH_STORAGE="file,/var/lib/varnish/varnish_storage.bin,1G"   # default storage backend
VARNISH_STORAGE="malloc,1G"                                        # switch to malloc (in-memory) storage
3. Define backend hosts
[root@node0 varnish]# vim default.vcl
backend default {
    .host = "172.18.42.202";    # backend host
    .port = "80";               # web port 80
}
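To balance across both backend hosts from the setup above rather than a single default backend, the two can be grouped in a round-robin director. A sketch using the host IPs listed earlier (the names web1, web2, and ws are invented for illustration):

```vcl
vcl 4.0;
import directors;

backend web1 {
    .host = "172.18.42.201";    # backend host 1
    .port = "80";
}

backend web2 {
    .host = "172.18.42.202";    # backend host 2
    .port = "80";
}

sub vcl_init {
    # Group the backends into a round-robin director.
    new ws = directors.round_robin();
    ws.add_backend(web1);
    ws.add_backend(web2);
}

sub vcl_recv {
    # Send each request to the next backend in turn.
    set req.backend_hint = ws.backend();
}
```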
2. Configure the backend hosts
1. Install httpd on backend host 1
[root@node1 ~]# yum install httpd -y
[root@node1 html]# systemctl start httpd.service
2. Install nginx on backend host 2
[root@node2 ~]# yum install nginx
[root@node2 nginx]# nginx