Cache server memcached and varnish


Two types of cache servers:
1. Proxy-style cache servers;
2. Bypass (side-attached) cache servers;

Cache servers can also be categorized by the content of the cached data:
1. Data cache;
2. Page cache;
Cached data has a "hot zone": the subset of data that is accessed most frequently.

The essence of caching is trading space for time;
Locality characteristics exploited by caches: spatial locality and temporal locality;

The timeliness of the cache:
Expired data cleanup: PURGE, trim;
Not expired, but the cache has overflowed: cleanup by LRU (least recently used);
Not expired, but the origin data has been modified: PURGE, trim;

Cache hit ratio: hits/(hits + misses); its value lies between 0 and 1 and is usually expressed as a percentage;
Page hit ratio: a measurement based on the number of pages that are hit;
Byte hit ratio: a measurement based on the number of bytes of hit content; (more precise)
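For example (illustrative numbers only): if 950 of 1000 requests are answered from the cache, the page hit ratio is 950/1000 = 95%; if those hits account for only 60 MB of the 100 MB delivered in total, the byte hit ratio is 60%, which is why the byte hit ratio is the more precise measure.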

Whether data should be cached:
1. Private data is placed in a private cache;
The HTTP/1.1 protocol indicates this with a Cache-Control header whose value contains the private directive, or with the presence of a Cookie or Cookie2 header; most such data is cached by the browser;
2. Public data: public information can be placed in a public cache, and can also be placed in a private cache;
Most such data is cached on a reverse proxy server or on a dedicated cache server;
A small part can also be cached in the browser;

Note:
The browser's cache (typically a private cache):
Takes effect only for the individual user, so that repeated accesses to a resource by that user are faster;
Cache server cache (typically a public cache):
Speeds up resource access for front-end or reverse proxy servers, and reduces the access pressure on the backend resource servers;
For cache servers, the processing steps for a (roughly successful) cache lookup are:
Receive request --> parse request (extract the URL and related header information from the HTTP protocol header of the request message) --> query the cache --> check the freshness of the cached data --> build the response message --> send the response message --> log the request

The validity judgment mechanism for cached data:
1. Expiry time:
http/1.0:
Expires, an absolute time;
http/1.1:
1. Expires
2. Cache-Control (header): max-age=86400
3. Cache-Control: s-maxage=10000 (s-maxage applies to public/shared caches)
2. Conditional requests:
Last-Modified / If-Modified-Since: conditional-request headers in the response message / request message;
ETag / If-None-Match: entity tag, a hash-like label for the resource; if the tag sent in If-None-Match does not match, the server returns 200 with the full resource; if it matches, 304 Not Modified is returned;

Example: viewing the response headers of Baidu's logo resource:
Cache-Control: max-age=2592000    //maximum time the resource may be cached
ETag: "593645fd-664"    //hexadecimal hash label of the resource
Expires: Wed, Jul 2018 03:43:28 GMT    //absolute expiry time
Last-Modified: Tue, Jun 06:04:45 GMT    //the resource has not changed since this time (GMT: Greenwich Mean Time)
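To illustrate the conditional-request mechanism, a sketch of a revalidation exchange follows; the URL and the full date are hypothetical, and the tag simply reuses the ETag value from the example above:
    GET /img/logo.png HTTP/1.1
    Host: www.example.com
    If-None-Match: "593645fd-664"
    If-Modified-Since: Tue, 06 Jun 2017 06:04:45 GMT

    HTTP/1.1 304 Not Modified
    ETag: "593645fd-664"
If the resource had changed on the server, the response would instead be 200 with the full body and a new ETag.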

One, memcached cache server:
memcached is a bypass (side-attached) cache server, and it is a data cache;

memcached storage format:
1. All data is stored in memory in the form of key-value pairs;
2. There is no persistent storage function;
3. Usually used to cache small files and small pieces of data; a single cached item is limited to 1 MB;

Memcached was developed by Brad Fitzpatrick at Danga Interactive, the company behind LiveJournal;

Memcached features:
1. k/v cache; values are serialized, flattened data; //JSON is a common serialization format
2. Stored items consist of simple parts: key, value, flag (a 16-bit number, 0-65535), expire_time (expiry time), length (length of the stored data), and so on;
3. Bypass cache: half of the functionality is implemented by the client, the other half by the memcached server;
4. O(1) lookup and execution efficiency;
5. A distributed cache system can be built, but the servers in the distributed cluster do not communicate with each other;
6. How cached data is handled (somewhat special):
1) When cache space is exhausted: expired data is cleaned up based on LRU (least recently used);
2) When a cache entry expires: a lazy cleanup mechanism is used, which helps minimize memory fragmentation;
7. Memory allocation method (illustrated below):
Slab allocation: memory is broken up into pages of at most 1 MB, and each page is further broken up into a number of smaller chunks;
Factor: growth factor; 1.25 by default; chunk sizes grow according to the growth factor, up to the 1 MB page size.
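As a rough illustration of the growth factor (the starting chunk size of 96 bytes is only an assumed example value): with the default factor of 1.25, the chunk classes grow approximately as 96 -> 120 -> 152 -> 192 -> 240 -> ... until they approach the 1 MB page size, and each cached item is stored in the smallest chunk class large enough to hold it.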

Installing and configuring memcached:
1. The Base repository provides a corresponding RPM package, so it can be installed directly with yum:
~]# yum install -y memcached
2. Download the source code package from the official site memcached.org, then compile and install it;

memcached cache service listening ports: 11211/tcp and 11211/udp.

Default environment after the memcached cache service is installed:
Main program: /usr/bin/memcached
Environment configuration file: /etc/sysconfig/memcached
    ~]# cat /etc/sysconfig/memcached
    PORT="11211"          //listening port number
    USER="memcached"      //user the program runs as
    MAXCONN="1024"        //maximum number of connections to the service
    CACHESIZE="64"        //cache size, 64 MB by default
    OPTIONS=""            //other options, empty by default

The protocol used by the memcached cache service is the memcached protocol; the data transfer formats supported by the memcached protocol are:
1. Text format;
2. Binary format;

Interactive operation (~]# telnet + IP address of the memcached host + port number (11211 by default); the telnet command enters interactive operation mode). Commands:
1. Statistics class: stats, stats items (view cache entries), stats slabs (view memory space), stats sizes;
2. Management class: set (modify the value of a particular key), add (add a new key), replace (replace the value of an existing key), append (append a value after the existing key value), prepend (prepend a value before the existing key value), delete (delete a key), incr (increase the value of a key), decr (decrease the value of a key), flush_all (clear all keys);
3. View class: get;

General format of memcached interactive commands:
    <command> key_name [flags] [expire_time] [length_bytes]
    key_name: key name
    [flags]: flag bits, range 0-65535
    [expire_time]: maximum cache time of the key (seconds)
    [length_bytes]: length of the data in bytes

Example:
Add a new key:
    add testkey 0 0 11
    Hello World
    STORED
View the new key:
    get testkey
    VALUE testkey 0 11
    Hello World
    END
Append characters after the key value and view it:
Append:
    append testkey 0 0 2
    hi
    STORED
View:
    get testkey
    VALUE testkey 0 13
    Hello Worldhi
    END
Prepend characters before the key value and view it:
Prepend:
    prepend testkey 0 0 3
    hi 
    STORED
View:
    get testkey
    VALUE testkey 0 16
    hi Hello Worldhi
    END

memcached command options:
    -c num: maximum number of concurrent connections, 1024 by default
    -m num: how much memory to allocate, 64 MB by default
    -u username: which user to run the memcached service process as
    -p port: TCP port number, 11211 by default
    -U port: UDP port number, 11211 by default; 0 disables UDP
    -l ipaddress: which address to listen on; all addresses by default
    -f factor: growth factor, 1.25 by default
    -t thread_num: number of threads, 4 by default
    -v, -vv, -vvv: level of detail of the output
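Putting a few of these options together, a possible startup command is sketched below; the values simply mirror the defaults listed above, and in practice the service is normally started with systemctl start memcached, which reads /etc/sysconfig/memcached:
    ~]# memcached -u memcached -m 64 -c 1024 -p 11211 -U 11211 -f 1.25 -t 4 -vv
The -vv option prints each request to the terminal, which is convenient for observing the protocol while testing with telnet.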

Two, varnish cache server:
The varnish cache server is a page cache server that serves query results for client requests, and it is also a proxy-style cache server.

Varnish cache server features:
What the varnish cache server can do: reverse proxy + cache server;
By default, varnish uses 128 MB of memory for its cache service;
Implementations of cache servers:
Open-source solutions:
Squid: appeared in the mid-1990s; less widely used nowadays;
Varnish: reverse proxy cache; a widely used cache server implementation;
https://www.varnish-software.com/    //site of the commercial version and services
http://varnish-cache.org/            //open-source project site

The organizational structure of the varnish program:
1. Management process:
1) the main process of varnish;
2) provides the access interface for the command-line tools; used to manage the child process and to initialize the cache;
2. Child/cache process: contains several types of threads, as follows:
Command line
Storage/hashing
Log/stats
Accept
Worker threads
Object expiry
Backend Communication
...
3. VCL compiler: the C compiler (GCC);

Varnish's thread pool function:
Statistical data: data statistics and counters;
Log area: log data;

How VCL files are processed:
VCL compiler (translates the VCL into C code) --> C compiler (compiles the generated C code) --> shared object

Tool for reloading VCL configuration files:
/usr/sbin/varnish_reload_vcl

VCL: Varnish Configuration Language;
VCL is written in files with the ".vcl" suffix; after varnish is first installed, a file named "default.vcl" is automatically created in the configuration directory, but its contents are almost entirely empty (commented out);
Installation of varnish:
A stable version of varnish is provided directly in the EPEL repository.
Install command: yum install -y varnish (the EPEL repository must be configured first)

The varnish program environment:
1. Configuration files:
/etc/varnish/default.vcl
The default configuration file that defines the cache policy; mainly used to configure the working behaviour of the child/cache threads;
/etc/varnish/varnish.params
Defines the runtime parameters and operating characteristics of the varnish cache service;
/etc/varnish/secret
Pre-shared key used for authentication when administering varnish with the varnishadm command-line tool;

2. Main program: /usr/sbin/varnishd
    1) CLI (simple command-line management tool): /usr/bin/varnishadm
    2) Shared-memory log interaction tools:
        /usr/bin/varnishhist
        /usr/bin/varnishlog
        /usr/bin/varnishncsa
        /usr/bin/varnishstat
        /usr/bin/varnishtop
       Enabling varnish logging (not enabled by default):
        1. /usr/bin/varnishlog
        2. /usr/bin/varnishncsa (recommended when logging is required, because its output is close to the Apache/NCSA log format)
    3) Cache test tool: /usr/bin/varnishtest
    4) varnish unit files:
        /usr/lib/systemd/system/varnish.service: manages the varnish management process;
       Service files for persistent log storage:
        /usr/lib/systemd/system/varnishlog.service
        /usr/lib/systemd/system/varnishncsa.service
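For example, the access log can be watched or persisted with the tools listed above (a sketch; the exact unit behaviour may vary with the package version):
    ~]# varnishncsa                           //watch requests interactively in Apache/NCSA-style format
    ~]# systemctl start varnishncsa.service   //or persist the log via the bundled unit file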

Main program:/usr/sbin/varnishd
varnishd - HTTP accelerator daemon
Common options:
    -a address[:port][,address[:port][...]]
        Specifies the address(es) and port(s) used to receive client requests; the default value is 127.0.0.1:6081;
    -b host[:port]
        Specifies a backend server host and port number;
    -T address[:port]
        Address and port to be used for the management interface;
    -f config
        Indicates the VCL configuration file to compile and load; the default is /etc/varnish/default.vcl;
    -p param=value
        Sets runtime parameters and their values for the varnish threading model;
        Most runtime parameters have sensible default values; do not adjust them lightly;
        Values adjusted with this option take effect only for the running instance;
    -r param[,param...]
        Marks the listed runtime parameters as read-only, meaning their values cannot be modified at run time;
    -s [name=]type[,options]: specifies the storage mechanism of the cache
        Defines how the cached data is stored; there are currently three storage mechanisms:
        1. malloc[,size]: the best performance, but all data is stored in memory;
        2. file[,path[,size[,granularity]]]: a black-box file whose contents cannot be inspected; the keys/hashes are kept in memory while the object data is stored in the file on disk, which keeps lookups fast without using too much memory;
        3. persistent,path,size (experimental): persistent storage (the mechanism is not yet mature);

        -T address[:port]
            Provides the management interface on the specified IP address and port; the default is 127.0.0.1:6082;
        -u user, -g group
            Specify the system user and system group that the varnish service process runs as; both default to varnish;
Configuration file for runtime parameters: /etc/varnish/varnish.params
    DAEMON_OPTS="-p param1=value1 -p param2=value2 ..."
    thread_pool_min=5
    thread_pool_max=500
    thread_pool_timeout=300
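Combining the options above, a varnishd invocation might look like the following sketch; the listen port, cache size, and thread-pool values simply mirror the defaults and parameters already mentioned, not a recommended production setup:
    ~]# varnishd -a :6081 -T 127.0.0.1:6082 \
         -f /etc/varnish/default.vcl \
         -s malloc,128m \
         -p thread_pool_min=5 -p thread_pool_max=500
In practice the service is started via systemctl, and these options are assembled from /etc/varnish/varnish.params.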

Command-line administration tool: varnishadm (interactive mode)
varnishadm - control a running Varnish instance
Common options:
    -S secret_file
        Specifies the pre-shared key file used by the command-line tool for authentication;
    -T address:port
        Specifies the management address and port;

Commands for interactive mode:
    help [<command>]: help for varnishadm commands;
    ping [<timestamp>]: connectivity test;
    auth <response>: authenticate;
    quit: exit interactive mode;
    banner: print the banner;
    status: show the status of the child process;
    start: start the varnish cache service (not recommended); //after a stop and start, the cache is emptied, which can be fatal
    stop: stop the varnish cache service (not recommended);
    Managing VCL:
        vcl.load <configname> <filename>: load and compile a VCL file;
        vcl.inline <configname> <quoted_vclstring>: compile a VCL configuration given inline as a string;
        vcl.use <configname>: activate an already compiled VCL configuration;
        vcl.discard <configname>: delete an already compiled VCL configuration;
        vcl.list: list the VCL configurations;
        vcl.show [-v] <configname>: display the contents of the specified VCL configuration;
    param.show [-l] [<param>]: list all runtime parameters or show details of the specified runtime parameter;
    param.set <param> <value>: temporarily modify the value of a runtime parameter; the change is lost and the default restored after a restart;
    panic.show: display the last panic message;
    panic.clear: clear the panic message;
    storage.list: display cache-storage related information;
    backend.list [<backend_expression>]: list the backend servers;
    backend.set_health <backend_expression> <state>: set the health state of a backend server;

Common commands:
VCL configuration file management related:
    vcl.load: load and compile a VCL configuration file;
    vcl.use: activate an already compiled VCL configuration file;
    vcl.discard: delete a VCL configuration file that is in the inactive state;

Runtime parameter related:
    param.show: display the list of runtime parameters or the details of a specified runtime parameter;
    param.set: set or modify the value of a runtime parameter in real time; (do not change these lightly; be sure you understand the purpose of the operation)
Cache storage related:
    storage.list: display cache-storage related information;
Backend server related:
    backend.list: display the list of directly reachable backend servers;
    backend.set_health: set the health state of a backend server;
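A typical sequence for loading a modified default.vcl with varnishadm might look like this sketch (the configuration name test01 is arbitrary):
    ~]# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
    varnish> vcl.load test01 /etc/varnish/default.vcl
    varnish> vcl.use test01
    varnish> vcl.list
    varnish> quit
This is essentially the sequence that the /usr/sbin/varnish_reload_vcl tool mentioned earlier automates.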

VCL: Varnish Configuration Language;
A domain-specific configuration language;

Multiple state engines are defined in VCL; the state engines are related to one another yet isolated from each other; inside each state engine the return() keyword can be used to indicate the transition to the next state engine; in a VCL file a state engine is defined as follows:
    sub STATE_ENGINE_NAME { ...; }
    i.e. sub + the name of the state engine + { directive 1; directive 2; ... }

Three, varnish state engine version comparison and data processing flow:
1. varnish 2.0 - the state engine includes:
    vcl_recv: receives the request and determines whether cached data can be used;
    vcl_hash: for data retrieved from the backend, computes the hash if it can be cached;
    vcl_hit: the requested resource can be served directly from a cached object;
    vcl_miss: the requested resource cannot be served directly from a cached object;
    vcl_deliver: builds the response message;
    vcl_fetch: retrieves data from the backend;

    varnish 2.0 flows:
        1. vcl_recv --> vcl_hash --> vcl_hit --> vcl_deliver
        2. vcl_recv --> vcl_hash --> vcl_miss --> vcl_fetch --> vcl_deliver
        3. vcl_recv --> vcl_fetch --> vcl_deliver (bypassing the cache)

2. varnish 3.0 - the state engine includes:
    vcl_recv: receives the request and determines whether cached data can be used;
    vcl_hash: for data retrieved from the backend, computes the hash if it can be cached;
    vcl_hit: the requested resource can be served directly from a cached object;
    vcl_miss: the requested resource cannot be served directly from a cached object;
    vcl_deliver: builds the response message;
    vcl_fetch: retrieves data from the backend;
    Added over varnish 2.0:
    vcl_pass: for resources that could be found in the cache but are explicitly configured not to use it; mostly the lookup strategy for private caches;
    vcl_pipe: for requests that varnish does not understand; the packets are handed to the backend as-is;
    vcl_error: builds the error response message when an error occurs locally;

    varnish 3.0 flows:
        1. vcl_recv --> vcl_hash --> vcl_hit --> vcl_deliver
        2. vcl_recv --> vcl_hash --> vcl_hit --> vcl_pass --> vcl_fetch --> vcl_deliver
        3. vcl_recv --> vcl_hash --> vcl_miss --> vcl_fetch --> vcl_deliver
        4. vcl_recv --> vcl_hash --> vcl_miss --> vcl_pass --> vcl_fetch --> vcl_deliver
        5. vcl_recv --> vcl_pass --> vcl_fetch --> vcl_deliver
        6. vcl_recv --> vcl_pipe

3. varnish 4.0+ - the state engine includes:
    vcl_recv: receives the request and determines whether cached data can be used;
    vcl_hash: for data retrieved from the backend, computes the hash if it can be cached;
    vcl_hit: the requested resource can be served directly from a cached object;
    vcl_miss: the requested resource cannot be served directly from a cached object;
    vcl_deliver: builds the response message;
    vcl_pass: for resources that could be found in the cache but are explicitly configured not to use it; mostly the lookup strategy for private caches;
    vcl_pipe: for requests that varnish does not understand; the packets are proxied to the backend as-is;
    Added:
    vcl_backend_fetch: retrieves data from the backend;
    vcl_backend_response: processes the data retrieved from the backend;
    vcl_backend_error: builds the error response message when an error occurs locally;
    vcl_synth: mainly used for cache pruning and synthetic responses; (derived from vcl_error in varnish 3.0)
    vcl_purge: prunes the cache;

    varnish 4.0 flows:
        1. vcl_recv --> vcl_hash --> vcl_hit --> vcl_deliver
        2. vcl_recv --> vcl_hash --> vcl_pass --> vcl_backend_fetch --> vcl_backend_response --> vcl_deliver
        3. vcl_recv --> vcl_hash --> vcl_miss --> vcl_backend_fetch --> vcl_backend_response --> vcl_deliver
        4. vcl_recv --> vcl_hash --> vcl_miss --> vcl_pass --> vcl_backend_fetch --> vcl_backend_response --> vcl_deliver
        5. vcl_recv --> vcl_hash --> vcl_purge --> vcl_synth
        6. vcl_recv --> vcl_pipe
        7. vcl_recv --> vcl_hash --> waiting (the request waits for a busy object and then re-enters the lookup)

There are two special state engines in varnish:
    vcl_init: called before any request is processed; mainly used to initialize VMODs;
    vcl_fini: called after all requests have been processed, when the VCL configuration is discarded; mainly used to clean up VMODs;
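The flows above are driven by return() calls in the VCL code; a minimal vcl_recv sketch showing how a request can be steered into the pipe, pass, or lookup paths (varnish 4 syntax; the method and header tests are only illustrative choices, not the built-in policy):
    sub vcl_recv {
        if (req.method != "GET" && req.method != "HEAD" &&
            req.method != "PUT" && req.method != "POST" &&
            req.method != "DELETE") {
            return (pipe);      # unrecognized method: hand the connection to the backend as-is
        }
        if (req.http.Authorization) {
            return (pass);      # private content: fetch from the backend, do not cache
        }
        return (hash);          # everything else: look it up in the cache
    }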

The syntax format for VCL:
1. The syntax is similar to the C and C++ languages; C-, C++-, and Perl-style comments are supported:
//, #, /* ... */
2. VCL does not support loops or jump statements, but it does support if conditional branching (single, dual, and multi-branch);
Single-branch format for if:
if (condition) {
...;
}
Two-branch format for if:
if (condition) {
...;
} else {
...;
}
Multi-branch format for if:
if (condition1) {
...;
} elseif (Condition2) {
...;
} else {
...;
}
3. All string content should be placed within double quotes ("") and cannot contain newline characters;
4. Support for built-in and custom ACLs (see the sketch after this list);
5. Rich operator support: = (assignment), == (equality comparison), >=, <, <=, ! (negation), && (logical and), || (logical or), ~ (regular expression or ACL match);
6. The return() keyword determines which engine handles the next processing step; return() itself has no return value;
7. Support for built-in and custom variables; there are dependencies between the built-in variables, and particular built-in variables can only be used in particular engines;
Set: var_name = value
Unset: var_name
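As an example of points 4 and 6 above, a sketch of a custom ACL used to restrict cache purging; the purgers network 192.168.1.0/24 is an assumption for illustration:
    acl purgers {
        "127.0.0.1";
        "192.168.1.0"/24;
    }

    sub vcl_recv {
        if (req.method == "PURGE") {
            if (!client.ip ~ purgers) {
                return (synth(405, "Purging not allowed"));   # reject purge requests from other hosts
            }
            return (purge);                                   # prune the matching object from the cache
        }
    }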

Built-in variables in VCL:
    req.*: request; variables related to the request message sent by the client;
    bereq.*: backend request; variables related to the request message sent by varnish to the backend host;
    resp.*: response; variables related to the response message sent by varnish to the front-end device or client;
    beresp.*: backend response; variables related to the response message sent by the backend host to varnish;
    obj.*: variables related to the cached objects in the cache space; read-only;

Common built-in variables:
    req class (client request):
        req.http.HTTP_HEADER_NAME: e.g. req.http.Cache-Control, req.http.User-Agent, req.http.Host;
        req.url: URL-related information in the request message sent by the client, e.g.:
            if (req.url ~ "(?i)\.php$") {
                return (pass);
            }
        req.method: the request method in the request message sent by the client;
        req.proto: the HTTP protocol version used in the request message sent by the client;
    bereq class (request sent by varnish to the backend host):
        bereq.http.HTTP_HEADER_NAME: e.g. bereq.http.Cache-Control, bereq.http.User-Agent, bereq.http.Host;
        bereq.url: URL-related information in the request message sent by varnish to the backend;
        bereq.method: the request method in the request message sent by varnish to the backend;
        bereq.proto: the HTTP protocol version used in the request message sent by varnish to the backend;
        bereq.backend: indicates the backend host to be used;
    resp class (response sent by varnish itself to the client):
        resp.http.HTTP_HEADER_NAME;
        resp.status: the response status code in the response message sent by varnish itself to the client;
        resp.proto: the HTTP protocol version in the response message sent by varnish itself to the client;
    beresp class (response sent by the backend host to varnish):
        beresp.http.HTTP_HEADER_NAME;
        beresp.status: the response status code in the response message sent by the backend host to varnish;
        beresp.proto: the HTTP protocol version in the response message sent by the backend host to varnish;
        beresp.backend.name: the name of the backend host that produced the response;
        beresp.ttl: the remaining cacheable time of the backend host's response content;
    obj class (cached objects):
        obj.hits: the number of times the specified object has been hit in the cache; (real-time value)
        obj.miss: the number of times the specified object has been missed in the cache; (real-time value)
        obj.ttl: the remaining valid cache duration of the specified object in the cache;
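A common way to use these variables is to expose hit/miss information to the client; a sketch using obj.hits and resp.http in vcl_deliver (the X-Cache header name is just a convention, not required by varnish):
    sub vcl_deliver {
        if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";    # the object was served from the cache
        } else {
            set resp.http.X-Cache = "MISS";   # the object was fetched from the backend
        }
    }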

Custom variables in VCL:
Define a variable:
    set var_name = value;
Cancel a defined variable:
    unset var_name;
To make the following configuration examples easier to present, the environment is as follows:
1. On the host where the varnish cache server is installed, modify the listening port in /etc/varnish/varnish.params from the default 6081 to 80; the other settings can keep their defaults;
2. Set VARNISH_LISTEN_PORT=80 in /etc/varnish/varnish.params, then save, exit, and restart the service (command: systemctl reload varnish.service);
3. iptables -F to flush the firewall rules;
4. setenforce 0 to switch SELinux to permissive mode so that access is not affected;
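With that environment in place, a minimal /etc/varnish/default.vcl sketch could look like the following; the backend address 192.168.1.10:80 is an assumption for illustration, not part of the original setup:
    vcl 4.0;

    backend default {
        .host = "192.168.1.10";   # IP address of the backend web server (assumed)
        .port = "80";             # port the backend web server listens on
    }
It can then be loaded with /usr/sbin/varnish_reload_vcl, or with the varnishadm vcl.load / vcl.use sequence shown earlier.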
