Recently the company ran a big promotional campaign, traffic exploded, and the back-end servers came under enormous pressure; response times for user requests stretched out and customers complained loudly.
To solve the problem as quickly as possible, while colleagues were assigned to optimize the back-end code, we considered adding a Varnish cache layer in front of nginx, so that only the dynamic requests would be passed through, directly reducing the load on the back-end servers.
In actual use we really felt the power of the Varnish server! After continuous tuning of the cache hit ratio, back-end CPU usage dropped straight from 80% to 20%, the large concurrency at the front end was digested directly, and the back-end servers showed no pressure at all. With this in place there is no longer any need to write scheduled background tasks that constantly regenerate static pages; just drop the pages into the cache and you are done. In addition, Varnish supports a "saint mode": when the back-end server errors out with a 500, Varnish can keep returning previously cached content, shielding users from some of the errors, which is sometimes a real lifesaver.
At the same time, though, we also stepped into plenty of pitfalls. The VCL language in Varnish is so powerful and flexible that the slightest misuse will shoot you in the foot. Most of the Varnish configuration files published online are copies of one another and cannot be used in production as they are. After studying for a few days and reading through a lot of material, we finally solved the problems we had run into.
The tuning experience is recorded below.
1. Introduction
Varnish is a professional website caching application (strictly speaking, a reverse proxy with caching) that can cache entire HTTP responses in memory or in files, thereby increasing the response speed of the web server.
Varnish has a powerful built-in configuration language, VCL (Varnish Configuration Language), which lets you adjust the caching policy flexibly against all kinds of criteria. When the daemon starts, Varnish compiles the VCL into binary code, so performance is very high.
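To give a taste of VCL, here is a minimal sketch (Varnish 3.x syntax; the one-hour TTL and the extension list are illustrative values, not recommendations from this article) that overrides the cache lifetime for image responses:

```vcl
sub vcl_fetch {
    # cache image responses for one hour, regardless of the back end's headers
    if (req.url ~ "\.(png|jpg|gif)$") {
        set beresp.ttl = 1h;
    }
}
```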
2. Installation
Varnish is also available in the EPEL repository, but only as a 2.x version.
Because the Varnish 3.0 configuration format differs greatly from 2.x, the Varnish team no longer updates the package in EPEL. If you want to install the latest version, the RPM method is recommended.
RPM installation
On a Red Hat family server it is easy to install directly from the RPM packages:
wget http://repo.varnish-cache.org/redhat/varnish-3.0/el6/x86_64/varnish/varnish-libs-3.0.4-1.el6.x86_64.rpm
wget http://repo.varnish-cache.org/redhat/varnish-3.0/el6/x86_64/varnish/varnish-3.0.4-1.el6.x86_64.rpm
wget http://repo.varnish-cache.org/redhat/varnish-3.0/el6/x86_64/varnish/varnish-docs-3.0.4-1.el6.x86_64.rpm
yum localinstall *.rpm
Varnish installation and configuration paths:
/etc/varnish/default.vcl    # default VCL configuration file
/etc/sysconfig/varnish      # service startup parameters
/etc/init.d/varnish         # service control script
By adjusting the parameters in the /etc/sysconfig/varnish configuration file you can change the startup options: set the thread pools, cache to memory or to a file, and so on. Of course, if you prefer, you can also start and manage the service manually by passing the startup parameters to varnishd yourself.
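For illustration, a trimmed-down /etc/sysconfig/varnish might look like this (a sketch based on the stock EL6 file; the port, thread counts and storage size are placeholder values, not tuned recommendations):

```shell
# /etc/sysconfig/varnish (excerpt)
VARNISH_LISTEN_PORT=80                  # port varnish answers on
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1  # management interface address
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_MIN_THREADS=50                  # worker thread pool bounds
VARNISH_MAX_THREADS=1000
VARNISH_STORAGE="malloc,1G"             # cache in memory; "file,/var/lib/varnish/storage.bin,10G" caches to disk

DAEMON_OPTS="-a :${VARNISH_LISTEN_PORT} \
             -f /etc/varnish/default.vcl \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},120 \
             -s ${VARNISH_STORAGE}"
```

The most important choice here is -s: malloc is fastest but bounded by RAM, while file storage lets the cache exceed memory at the cost of disk I/O.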
Varnish can now be started as a service:
service varnish start  (stop/restart)
Set varnish to start on boot:
chkconfig varnish on
Compiling from source
Install the dependency packages:
yum install ncurses-devel.x86_64
This step is optional.
If, after compiling Varnish, you cannot find the varnishstat, varnishtop and varnishhist programs in the bin directory, it is because the ncurses-devel package matching the operating system's word size was not installed before compiling. These tools are very useful, so it is recommended that you install this dependency first.
Installing pcre
Varnish relies on pcre for regular-expression matching of URLs.
cd pcre-8.12
./configure --prefix=/usr/local/
make && make install
Compile
Download and unpack the varnish source package:
wget http://repo.varnish-cache.org/source/varnish-3.0.4.tar.gz
cd /root
tar -zxvf varnish-3.0.4.tar.gz
cd varnish-3.0.4
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
./configure --prefix=/usr/local/varnish
make && make install
3. The VCL processing flow
First, an introduction to the main subroutines and flow Varnish uses to process a request.
VCL defines several default subroutines, which are called back at each stage of Varnish's HTTP request processing:
vcl_recv: the entry point of a request, which decides whether to process it further or hand it straight to the back end (pass). Only request-related variables are available here, such as the client's URL, IP, User-Agent and Cookie. In this step you can match the addresses that must not be cached (by equality, inequality, regular expression and other tests) and pass them through to the back end, e.g. POST requests and dynamic content with extensions such as .jsp, .asp and .do;
vcl_fetch: entered once the response content has been obtained from the back-end server. In addition to the client request variables you can use the information obtained from the back end (beresp), such as the back end's response headers, and specify the cache lifetime (TTL) of the object;
vcl_miss: processing to be done when the cache lookup misses;
vcl_hit: processing after a cache hit;
vcl_deliver: processing just before the response is sent to the client;
vcl_pass: the request is passed through to the back-end server;
vcl_hash: sets the key under which the object is cached.
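The article gives no vcl_hash example, so for reference this is the behaviour of the built-in default in Varnish 3: the object is keyed on the URL plus the Host header, falling back to the server IP when no Host header is present.

```vcl
sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (hash);
}
```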
The flow for the first request is as follows:
recv -> hash -> miss -> fetch -> deliver
Requesting again after the object has been cached:
recv -> hash -> hit -> deliver (the fetch step is skipped; this is exactly the point of caching the page)
Passing directly to the back end:
recv -> hash -> pass -> fetch -> deliver (the data is sent from the back end straight to the client; varnish acts as a mere middleman and only forwards)
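The flows above are steered by the return() values in vcl_recv. A minimal sketch (Varnish 3.x syntax; treating POST requests and .do URLs as "dynamic" is just an illustrative choice):

```vcl
sub vcl_recv {
    # dynamic requests go to the back end and are never cached
    if (req.request == "POST" || req.url ~ "\.do($|\?)") {
        return (pass);
    }
    # everything else is looked up in the cache: a miss triggers a fetch,
    # a hit is delivered without contacting the back end
    return (lookup);
}
```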
4. Tuning via logs
When the installation is complete, the default configuration file is located at
/etc/varnish/default.vcl
We can refer to the default configuration to learn the VCL language and keep tuning from there.
But modifying the configuration directly and restarting over and over makes tuning painfully slow! After much fumbling I found that varnish has a built-in log module: by importing the std library at the top of default.vcl we can output log messages:
import std;
Wherever log output is needed, std.log can be used:
std.log("log_debug: url=" + req.url);
This way you can follow varnish's work flow through the log, which makes optimization very convenient, easily ten times more efficient!
For example, if you want to track which requests do not hit the cache, you can write this in the vcl_miss subroutine:
sub vcl_miss {
    std.log("URL Miss!!! url=" + req.url);
    return (fetch);
}
After starting varnish, follow the printed log entries with the varnishlog tool (in Varnish 3, std.log output carries the VCL_Log tag):
varnishlog -i VCL_Log
5. Load balancing
Varnish can mount several back-end servers, apply weights and round-robin scheduling, and forward requests across the back-end nodes to avoid a single point of failure.
Examples are as follows:
backend web1 {
    .host = "172.16.2.31";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 10s;
        .timeout = 2s;
        .window = 3;
        .threshold = 3;
    }
}
backend web2 {
    .host = "172.16.2.32";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 10s;
        .timeout = 2s;
        .window = 3;
        .threshold = 3;
    }
}
# define the load-balancing group
director webgroup random {
    {
        .backend = web1;
        .weight = 1;
    }
    {
        .backend = web2;
        .weight = 1;
    }
}
With the probe option added to a backend, health checks can be performed against the back-end node. If the node becomes unreachable it is automatically removed until it recovers.
Pay attention to the window and threshold parameters. When a back-end server was unreachable, varnish would occasionally report a 503 error. The explanations found online blamed thread groups and the like, but in testing they were completely wrong. It later turned out that as long as window and threshold are set to the same value, the 503s no longer occur.
6. Grace mode and Saint mode
Grace mode
If the back end needs a long time to generate an object, there is a risk of threads piling up. Grace mode avoids this: varnish can serve an existing copy while a new version is generated by the back end, and when several requests arrive at the same time, varnish sends only one request to the back-end server. With
set beresp.grace = 30m;
the old cached result is returned to clients within that window.
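A fuller grace setup, sketched after the pattern in the Varnish 3 documentation (the 30s and 1h values are illustrative), keeps objects past their TTL and accepts staler content the longer the back end is down:

```vcl
sub vcl_recv {
    if (req.backend.healthy) {
        set req.grace = 30s;    # normally accept objects up to 30s past their TTL
    } else {
        set req.grace = 1h;     # back end down: accept much staler objects
    }
}
sub vcl_fetch {
    set beresp.grace = 1h;      # keep objects 1h past TTL so grace has something to serve
}
```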
Saint mode
Sometimes servers get weird and throw out random errors, and you need to tell varnish to handle this more gracefully, which is called saint mode. Saint mode lets you temporarily give up on a back-end server, try another back-end server, or serve stale content from the cache.
For example:
sub vcl_fetch {
    if (beresp.status == 500) {
        set beresp.saintmode = 10s;
        return (restart);
    }
    set beresp.grace = 5m;
}
7. Complete example
import std;
backend web1 {
    .host = "172.16.2.31";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 10s;
        .timeout = 2s;
        .window = 3;
        .threshold = 3;
    }
}
backend web2 {
    .host = "172.16.2.32";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 10s;
        .timeout = 2s;
        .window = 3;
        .threshold = 3;
    }
}
# define the load-balancing group
director webgroup random {
    {
        .backend = web1;
        .weight = 1;
    }
    {
        .backend = web2;
        .weight = 1;
    }
}
# IPs allowed to purge the cache
acl purgeallow {
    "localhost";
    "172.16.2.5";
}
sub vcl_recv {
    # cache purge handling
    if (req.request == "PURGE") {
        # check whether the client IP is allowed
        if (!client.ip ~ purgeallow) {
            error 405 "Not allowed.";
        }
        # look the object up in the cache
        return (lookup);
    }
    std.log("log_debug: url=" + req.url);
    set req.backend = webgroup;
    if (req.request != "GET" && req.request != "HEAD" &&
        req.request != "PUT" && req.request != "POST" &&
        req.request != "TRACE" && req.request != "OPTIONS" &&
        req.request != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }
    # cache only GET and HEAD requests
    if (req.request != "GET" && req.request != "HEAD") {
        std.log("log_debug: req.request not GET! " + req.request);
        return (pass);
    }
    # pages using HTTP authentication are also passed
    if (req.http.Authorization) {
        std.log("log_debug: req is authorization!");
        return (pass);
    }
    if (req.http.Cache-Control ~ "no-cache") {
        std.log("log_debug: req is no-cache");
        return (pass);
    }
    # read the admin, verify and servlet directories, URLs ending in .jsp/.php/.do,
    # and URLs with a query string directly from the back-end server
    if (req.url ~ "^/admin" || req.url ~ "^/verify/" || req.url ~ "^/servlet/" || req.url ~ "\.(jsp|php|do)($|\?)") {
        std.log("url is admin or servlet or jsp|php|do, pass.");
        return (pass);
    }
    # cache only requests with the specified extensions, and strip their cookies
    if (req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|bmp|tif|tiff|ico|wmf|js|css|ejs|swf|txt|zip|exe|html|htm)(\?.*|)$") {
        std.log("*** url is jpeg|jpg|png|gif|ico|js|css|txt|zip|exe|html|htm, set cached! ***");
        unset req.http.Cookie;
        # normalize the request: keep only the necessary Accept-Encoding content
        if (req.http.Accept-Encoding) {
            if (req.url ~ "\.(jpg|png|gif|jpeg)(\?.*|)$") {
                remove req.http.Accept-Encoding;
            } elsif (req.http.Accept-Encoding ~ "gzip") {
                set req.http.Accept-Encoding = "gzip";
            } elsif (req.http.Accept-Encoding ~ "deflate") {
                set req.http.Accept-Encoding = "deflate";
            } else {
                remove req.http.Accept-Encoding;
            }
        }
        return (lookup);
    } else {
        std.log("url is not cached!");
        return (pass);
    }
}
sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
    return (deliver);
}
sub vcl_miss {
    std.log("################# cache miss ################### url=" + req.url);
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}
sub vcl_fetch {
    # if the back-end server returns an error, enter saint mode
    if (beresp.status == 500 || beresp.status == 501 || beresp.status == 502 || beresp.status == 503 || beresp.status == 504) {
        std.log("beresp.status error!!! beresp.status=" + beresp.status);
        set req.http.host = "status";
        set beresp.saintmode = 20s;
        return (restart);
    }
    # if the back end forbids caching, skip it
    if (beresp.http.Pragma ~ "no-cache" || beresp.http.Cache-Control ~ "no-cache" || beresp.http.Cache-Control ~ "private") {
        std.log("not allow cached! beresp.http.Cache-Control=" + beresp.http.Cache-Control);
        return (hit_for_pass);
    }
    if (beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
        /* mark as "hit-for-pass" for the next 2 minutes */
        set beresp.ttl = 120s;
        return (hit_for_pass);
    }
    if (req.request == "GET" && req.url ~ "\.(css|js|ejs|html|htm)$") {
        std.log("gzip is enabled.");
        set beresp.do_gzip = true;
        set beresp.ttl = 20s;
    }
    if (req.request == "GET" && req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|bmp|tif|tiff|ico|wmf|js|css|ejs|swf|txt|zip|exe)(\?.*|)$") {
        std.log("url css|js|gif|jpg|jpeg|bmp|png|tiff|tif|ico|swf|exe|zip|wmf is cached 5m!");
        set beresp.ttl = 5m;
    } elsif (req.request == "GET" && req.url ~ "\.(html|htm)$") {
        set beresp.ttl = 30s;
    } else {
        return (hit_for_pass);
    }
    # if the back end is unhealthy, keep serving cached data for 1 minute
    if (!req.backend.healthy) {
        std.log("req.backend not healthy! req.grace = 1m");
        set req.grace = 1m;
    } else {
        set req.grace = 30s;
    }
    return (deliver);
}
# processing before sending to the client
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "has cache";
    } else {
        #set resp.http.X-Cache = "no cache";
    }
    return (deliver);
}
8. Administration commands
Varnish comes with some convenient debugging tools; using them helps a lot in applying varnish well.
varnishncsa (displays the log in NCSA format)
With this command you can view user access logs just as you would with nginx or Apache.
varnishlog (detailed varnish log)
You can use this command if you want to trace the detailed handling of each request by varnish.
Used directly, this command displays a huge amount of content; usually we pass some parameters so that it shows only what we care about.
-b          only show log entries exchanged between varnish and the back-end server; useful when you want to optimize the hit ratio.
-c          similar to -b, but for the traffic between varnish and the client.
-i tag      only show one tag, e.g. "varnishlog -i SessionOpen" will display only new sessions. Note that tag names are not case-sensitive here.
-I regex    filter entries with a regular expression, e.g. "varnishlog -c -i RxHeader -I Cookie" will display all header lines received from clients that contain the word Cookie.
-o          aggregate log entries by request ID.
For example:
varnishlog -c -o RxURL /auth/login    # show all client-side (-c) requests, grouped by ID (-o), whose received URL matches "/auth/login"
varnishlog -c -o ReqStart 192.168.1.1    # track the requests of a single client only
varnishtop
You can use varnishtop to determine which URLs are most often passed to the back end.
With appropriate filtering using the -i, -I, -x and -X options, it can display the requested content, clients, user agents and other log information according to your needs.
varnishtop -i rxurl    # shows which URLs clients request most often.
varnishtop -i txurl    # shows which URLs are requested from the back-end server most often.
varnishtop -i RxHeader -I Accept-Encoding    # shows how many received headers contain Accept-Encoding.
varnishstat
Displays statistics about a running varnishd instance.
Varnish maintains many counters: request drop rate, hit ratio, storage information, threads created, objects deleted and so on, covering almost every operation. By tracking these counters you can get a good picture of varnish's running state.
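The counters that matter most for the tuning described in this article are cache_hit and cache_miss; the hit ratio is hits / (hits + misses). A small sketch (the counter values below are made-up placeholders; on a live server you would read them from `varnishstat -1`):

```shell
# made-up example counters; on a real server you could extract them with:
#   varnishstat -1 | awk '/^cache_(hit|miss) /{print $1, $2}'
hits=9500
misses=500

# hit ratio = hits / (hits + misses), as a percentage
ratio=$(awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%.1f", 100 * h / (h + m) }')
echo "hit ratio: ${ratio}%"    # prints: hit ratio: 95.0%
```

As a rule of thumb, a climbing miss count on URLs you expect to be cacheable is the signal to revisit the vcl_recv rules above.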
varnishadm
Controls the varnish server from the command line. You can purge the cache dynamically, reload the configuration file, and so on.
There are two ways to connect to the management port:
1. telnet: connect to the management port via telnet, e.g. "telnet localhost 6082"
2. varnishadm: issue commands through varnish's own management tool, e.g. varnishadm -n vcache -T localhost:6082 help
Dynamically purging the cache
varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082 ban.url /2011111.png
Note: the path after ban.url must not carry a domain name such as abc.xxx.com, otherwise the cache cannot be purged.
Purge all URL addresses under a subdirectory:
/usr/local/varnish/bin/varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082 ban.url ^/a/
Reloading the configuration file without a restart
Log in to the management interface:
/usr/local/varnish/bin/varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
Load the configuration file:
vcl.load new.vcl /etc/varnish/default.vcl
A compile error will be reported on failure; on success it returns 200.
Switch to the new configuration:
vcl.use new.vcl
The new configuration file is now in effect!
Varnish Installation and Tuning notes