Varnish Proxy Cache
I. Basic varnish knowledge
1. How Varnish works
A client request arrives at the Varnish proxy. The acceptor in the child process receives the connection and hands it to a worker thread for processing. The worker thread first looks for the requested object in the cache; on a miss it fetches the resource from the backend (upstream) server and, back at the proxy, checks whether the response matches the cache rules: if it does, the object is cached and then returned to the client; if not, it is returned directly to the client without being cached.
2. Cache classification
Proxy cache: the client sends its request to the proxy, which first checks its own cache. On a miss, the proxy fetches the resource from the upstream server, stores it in its cache, and then returns it to the client.
Bypass cache: the client first queries the cache directly. On a miss, the client fetches the resource from the upstream server itself, and then writes the resource into the cache.
3. Applicable scenarios for memcached
Drawback of memcached: it does not cope well with data that is updated in real time; frequently changing data keeps missing the cache, so the hit rate stays low.
memcached is a distributed cache. A simple MySQL master-slave setup does not need it, since repeated queries can be answered from MySQL's own query cache; memcached is better suited to larger multi-node MySQL cluster environments.
4. Functions of the Varnish state engines:
vcl_recv: entry point for client requests; implements security policy, processes only recognized HTTP methods, caches only GET and HEAD, and never caches user-specific data (cache policy based on the client request);
vcl_fetch: cache policy based on the backend server response;
vcl_pipe: pipes the request directly to the backend host;
vcl_hash: customizes the data used to generate the hash key;
vcl_pass: passes the request to the backend host without caching the response;
vcl_hit: actions to take when the requested object is found in the cache;
vcl_miss: actions to take when the requested object is not found in the cache;
vcl_deliver: prepares the response content before it is delivered to the client;
vcl_error: generates error responses on the Varnish side;
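Taken together, a minimal Varnish 3 VCL (the version installed in the experiments below) touching several of these state engines might look like the following sketch; the URL pattern and the 5-minute TTL are illustrative assumptions, not part of these notes:

```vcl
# Minimal illustrative VCL, Varnish 3 syntax; values are examples only.
sub vcl_recv {
    # Pass dynamic content straight to the backend (pattern is an assumption).
    if (req.url ~ "\.php$") {
        return (pass);
    }
    return (lookup);          # default: try the cache first
}

sub vcl_fetch {
    # Cache policy based on the backend response: force a 5-minute TTL
    # on successful responses (TTL chosen purely for illustration).
    if (beresp.status == 200) {
        set beresp.ttl = 5m;
    }
    return (deliver);
}
```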
5. Principle of Varnish Cache
II. Varnish experiments
Node1 172.16.11.143 CentOS 6.5 + Varnish
Node2 172.16.11.144 CentOS 6.5 + httpd
1. Software Installation
http://repo.varnish-cache.org/redhat/varnish-3.0/el6/x86_64/varnish/
varnish-libs-3.0.5-1.el6.x86_64.rpm
varnish-3.0.5-1.el6.x86_64.rpm
varnish-docs-3.0.5-1.el6.x86_64.rpm
Node1
rpm -ivh varnish-libs-3.0.5-1.el6.x86_64.rpm varnish-3.0.5-1.el6.x86_64.rpm
/etc/logrotate.d/varnish      # log rotation
/etc/rc.d/init.d/varnish      # service script
/etc/rc.d/init.d/varnishlog   # logging service
/var/lib/varnish              # shared cache
/var/log/varnish              # log storage
/etc/varnish/default.vcl      # configuration file
2. Simple proxy configuration
Node1
Edit /etc/sysconfig/varnish:
VARNISH_LISTEN_PORT=80
VARNISH_STORAGE_SIZE=64M    # cache size
#
# Backend storage specification
# VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"    # keep the cache in memory (malloc)
vim /etc/varnish/default.vcl
backend default {
    .host = "172.16.11.144";
    .port = "80";
}
service varnish restart
ss -tnl    # check that the port is listening
Node2
yum install httpd -y
vim /var/www/html/index.html
<h1>node2</h1>
service httpd restart
chkconfig --add httpd
chkconfig httpd on
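Before putting Varnish in front of it, the backend can be verified directly; a quick sketch assuming the lab address 172.16.11.144 from the notes above:

```shell
# Query node2's httpd directly, bypassing Varnish, to confirm the test
# page is served before testing through the proxy.
curl -I http://172.16.11.144/index.html   # headers; should report 200 OK
curl http://172.16.11.144/index.html      # body; should contain <h1>node2</h1>
```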
3. Letting the backend server see the real client address
Node1
cp /etc/varnish/default.vcl /etc/varnish/test.vcl
vim /etc/varnish/test.vcl
sub vcl_recv {
    if (req.restarts == 0) {
        if (req.http.X-Forwarded-For) {
            set req.http.X-Forwarded-For =
                req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
    return (lookup);
}
[root@localhost ~]# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
varnish> vcl.list          # list available configurations
200
active          2 boot
available       0 test1
varnish> vcl.load test1 test1.vcl    # compile
200
VCL compiled.
varnish> vcl.use test1     # activate
200
varnish> vcl.list          # list available configurations
200
available       2 boot
active          0 test1
Node2
vim /etc/httpd/conf/httpd.conf
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
service httpd reload
########################################
Test http://172.16.11.143 in a browser
Node2
tail /var/log/httpd/access_log
172.16.0.101 - - [04/Sep/2014:16:43:27 +0800] "GET /favicon.ico HTTP/1.1" 404 288 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36"
4. Adding a header that shows whether the cache was hit
Node1
vim /etc/varnish/test.vcl
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "Hit via " + server.hostname;
    } else {
        set resp.http.X-Cache = "Miss via " + server.hostname;
    }
    return (deliver);
}
[root@localhost ~]# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
varnish> vcl.load test2 test1.vcl
200
VCL compiled.
varnish> vcl.use test2
200
varnish> vcl.list
200
available       0 boot
available       2 test1
active          0 test2
Open the page in a browser, press F12, and inspect the X-Cache response header.
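The same check can be done from the command line; a sketch assuming the lab address 172.16.11.143, with the exact header values depending on cache state:

```shell
# First request: the object is not yet cached, so X-Cache should report a miss.
curl -s -I http://172.16.11.143/index.html | grep X-Cache
# Second request: the object should now be served from the cache (hit).
curl -s -I http://172.16.11.143/index.html | grep X-Cache
```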
5. Exempting specific pages from caching
Add the following to vcl_recv in test1.vcl:
if (req.url ~ "^/test/.*\.html$") {
    return (pass);
}
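In context, the rule sits inside sub vcl_recv ahead of the default lookup; a minimal sketch (the URL pattern is reconstructed from the browser test URL in this section):

```vcl
sub vcl_recv {
    # Requests matching this pattern bypass the cache entirely.
    if (req.url ~ "^/test/.*\.html$") {
        return (pass);
    }
    return (lookup);    # everything else follows the normal cache path
}
```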
Compile and activate the new configuration:
vcl.load test3 test1.vcl
vcl.use test3
vcl.list
Test in a browser:
http://172.16.11.143/test/index.html
Variable usage rules
6. Which built-in variables can be used in which state engine