Varnish Introduction and Practical Configuration


If any of the configuration directives below are unclear, refer to the VCL manual.

 

1. What is varnish?

1. varnish is a reverse proxy server and a web-acceleration cache server. Cached objects are stored as key/value pairs: the key is generally a URL, and the value is the resource (object) that the URL refers to.

 

2. The varnish configuration file is written in a dedicated language, VCL (Varnish Configuration Language). The VCL is first translated into C code, compiled, and then executed.
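
As a quick way to see this in action (a sketch, assuming the default VCL path used later in this article), varnishd can translate a VCL file and print the generated C source without starting the daemon:

varnishd -C -f /etc/varnish/default.vcl # compile the VCL and dump the resulting C code to stdout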

 

3. varnish supports high-performance I/O event notification mechanisms such as epoll and kqueue.

 

4. varnish writes its logs to a shared memory log segment that it allocates at startup; the default size is about 80 MB.
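
Because the log lives in shared memory rather than in an ordinary file, it is read with the bundled tools, for example:

varnishlog # stream the raw shared-memory log entries
varnishstat # show counters (hit rate, etc.) read from the same shared-memory segment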

 

2. varnish principles

1. When varnish starts, it runs two processes: the management (master) process and the child (cache) process.

2. The management process is mainly responsible for managing configuration files, monitoring the child process, and initializing varnish; it also provides a command-line management interface (a usage example follows the thread list below).

 

3. The child process consists of several threads of different types. Typical examples include:

● Acceptor thread: used to receive new connections

● Worker threads: handle requests; one thread serves one connection/session at a time.

● Expiry thread: evicts expired objects from the cache.
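
A minimal example of using the command-line management interface mentioned above, via varnishadm (assumes the default admin address/port and secret file installed by the EL6 package):

varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret # open an interactive management session
varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret vcl.list # or run a single command, here listing the loaded VCL configurations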

 

3. cache storage types supported by varnish:

● file: stores all cached data in a single file and maps that file into the process address space with the mmap() system call. The cached content is lost when varnish is restarted or stopped, i.e. it is not persistent.

● malloc: requests a fixed amount of memory with malloc() when varnish starts and caches data in that memory.

● persistent: still experimental and not recommended for production.

Note: the storage type is specified with the -s parameter, for example:
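
(the size and path below are illustrative, not required values)

-s malloc,256m # keep the cache in 256 MB of memory
-s file,/var/lib/varnish/varnish_storage.bin,1G # keep the cache in a 1 GB mmap()ed file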

 

 

 

4. installation and configuration

Environment: CentOS 6.5, with iptables and SELinux disabled.

Packages:

varnish-3.0.5-1.el6.x86_64.rpm

varnish-docs-3.0.5-1.el6.x86_64.rpm

varnish-libs-3.0.5-1.el6.x86_64.rpm

Download: http://repo.varnish-cache.org/RedHat/varnish-3.0/el6/x86_64/varnish/
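
Assuming the three packages have been downloaded to the current directory, they can be installed together, for example:

yum -y --nogpgcheck localinstall varnish-libs-3.0.5-1.el6.x86_64.rpm varnish-3.0.5-1.el6.x86_64.rpm varnish-docs-3.0.5-1.el6.x86_64.rpm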

 

Front end: one varnish server with two NICs:

eth0: 172.16.0.11

eth1: 192.168.0.11

Back end: two Apache servers, one NIC each:

Web1: 192.168.0.12

Web2: 192.168.0.13

Gateway: 192.168.0.11 (the varnish server's internal NIC)

 

Configuration files:

Directory: /etc/varnish

File: default.vcl

File: secret # shared secret file used by the management interface

Init script: /etc/rc.d/init.d/varnish

Script configuration file: /etc/sysconfig/varnish # used to specify startup parameters

-a <[hostname]:port>: address and port on which varnish listens for client requests.

-f <filename>: path to the VCL file.

-p <parameter=value>: set/tune a runtime parameter.

-S <secretfile>: shared secret file used to authenticate to the command-line management interface.

-T <[hostname]:port>: address and port of the command-line management interface.

-s <storagetype,options>: storage type, location, and size for cached objects.
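
With the EL6 package these options are normally not typed by hand; /etc/sysconfig/varnish builds them from shell variables, roughly like the following trimmed sketch (values are the package defaults, not this article's final settings):

VARNISH_VCL_CONF=/etc/varnish/default.vcl # becomes -f
VARNISH_LISTEN_PORT=6081 # becomes -a
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1 # becomes -T (address)
VARNISH_ADMIN_LISTEN_PORT=6082 # becomes -T (port)
VARNISH_SECRET_FILE=/etc/varnish/secret # becomes -S
VARNISH_STORAGE="file,/var/lib/varnish/varnish_storage.bin,1G" # becomes -s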

 

 

Load-balancing algorithms supported by the director:

● round-robin: takes no additional parameters.

● random: takes two parameters (see the sketch after this list):

1. weight: the higher a backend's weight, the more requests it receives; set per backend.

2. retries: the number of attempts to find a healthy backend; set once for the whole director.

● hash: a variant of random that schedules by a hash of the request (the URL), so requests for the same URL always go to the same backend. This is well suited when the backend servers are themselves cache servers.

● client: another variant of random that schedules by client.identity. By default this variable holds the client IP address, but it can be changed in VCL. All requests carrying the same client.identity value are sent to the same backend server.

● dns: selects a backend from the given list by matching the Host header of the client request. (Rarely used.)
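
For example, a random director using the weight and retries parameters described above might look like this (a sketch that reuses the web1/web2 backends defined in the configuration steps below):

director webs random {
    .retries = 5;
    { .backend = web1; .weight = 2; } # web1 receives roughly twice as many requests as web2
    { .backend = web2; .weight = 1; }
}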

 

 

Note: VCL objects must be defined before they are used, and a definition that is never used is also rejected. For example, after defining a director you must reference it somewhere later in the VCL; otherwise compilation fails.

 

 

1. Change the port varnish listens on to 80

vim /etc/sysconfig/varnish

VARNISH_LISTEN_PORT=80

 

2. Define backend servers

backend web1 {
    .host = "192.168.0.12";
    .port = "80";
}

backend web2 {
    .host = "192.168.0.13";
    .port = "80";
}

 

3. Define a load-balancing director (scheduler)

director webs round-robin {
    { .backend = web1; }
    { .backend = web2; }
}

 

 

4. Call the director defined above

sub vcl_recv {
    set req.backend = webs;
    return (pass); # for testing, replace the default return (lookup) with return (pass); otherwise repeat requests are answered from the cache and the balancing cannot be observed
}
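
After editing default.vcl, the new configuration can be compiled and activated without restarting the daemon (the label lb1 is arbitrary; the admin address and secret file are the package defaults):

varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret vcl.load lb1 /etc/varnish/default.vcl # compile and load the VCL under a label
varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret vcl.use lb1 # switch traffic to the newly loaded VCL

A plain service varnish restart also works, at the cost of emptying the cache.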

 

5. Define a backend health check (probe). The probe is referenced from the backend sections, so it must be defined above them; the backend definitions from step 2 are extended with a .probe line.

probe healthchk {
    .url = "/";
    .interval = 3s;
    .timeout = 10ms;
    .window = 3;
    .threshold = 2;
    .initial = 3;
    .expected_response = 200;
}

backend web1 {
    .host = "192.168.0.12";
    .port = "80";
    .probe = healthchk;
}

backend web2 {
    .host = "192.168.0.13";
    .port = "80";
    .probe = healthchk;
}
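
Probe results can be watched in the shared memory log, for example:

varnishlog -i Backend_health # one entry per probe, showing whether each backend is currently considered healthy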

 

6. Configure dynamic/static separation. Assume web1 serves dynamic content and web2 serves static content; comment out the director configuration first.

sub vcl_recv {
    if (req.url ~ "(?i)\.php$") {
        set req.backend = web1;
    } else {
        set req.backend = web2;
    }
}

sub vcl_fetch {
    set beresp.http.From-Backend = beresp.backend.ip; # used to verify that dynamic and static requests reach the intended backend
}
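
To verify the separation, request one dynamic and one static URL through the varnish front end and inspect the header added above (the URLs are illustrative):

curl -I http://172.16.0.11/index.php # From-Backend should show 192.168.0.12 (web1)
curl -I http://172.16.0.11/index.html # From-Backend should show 192.168.0.13 (web2)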

 

7. Configure hotlink protection (anti-leeching)

sub vcl_recv {
    if (req.http.referer ~ "http://.*") {
        if (!(req.http.referer ~ "http://.*\.google\.com" ||
              req.http.referer ~ "http://.*\.baidu\.com" ||
              req.http.referer ~ "http://.*\.myselfsite\.com")) {
            set req.http.host = "www.myselfsite.com";
            set req.url = "/login/login.html";
        }
    }
}

 

8. How can you tell whether a request hit the cache, and how many times?

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Varnish-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Varnish-Cache = "MISS";
    }
}
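
With return (lookup) restored in vcl_recv, requesting the same cacheable URL twice should show the header change from MISS to HIT (URL illustrative):

curl -I http://172.16.0.11/index.html # first request: X-Varnish-Cache: MISS
curl -I http://172.16.0.11/index.html # second request: X-Varnish-Cache: HIT, X-Cache-Hits: 1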

 

9. Strip Cookie headers from requests for static files to improve the cache hit rate.

sub vcl_recv {
    if (req.url ~ "(?i)\.(html|htm|jpg|gif|ico|jpeg|png|js|css|swf)(\?[a-z0-9]+)?$") {
        unset req.http.Cookie;
    }
}

sub vcl_fetch {
    if (req.url ~ "(?i)\.(html|htm|jpg|gif|ico|jpeg|png|js|css|swf)(\?[a-z0-9]+)?$") {
        unset beresp.http.Set-Cookie;
    }
}

 

10. Manually purge cached objects

acl purgers {
    "127.0.0.1";
    "192.168.0.0"/24;
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Method not allowed";
        }
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 404 "Not in cache";
    }
}

sub vcl_pass {
    if (req.request == "PURGE") {
        error 502 "PURGE on a passed object";
    }
}
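
A purge can then be issued with an HTTP PURGE request from an address allowed by the ACL (URL illustrative):

curl -X PURGE http://172.16.0.11/index.html # 200 Purged if the object was cached, 404 Not in cache otherwise; clients outside the purgers ACL get 405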

 

