A few days ago I was helping someone optimize a PHP application and ended up embarrassing myself; in the end I just enabled the fastcgi cache, which felt like a cop-out. The fastcgi cache does not support distributed caching, so once you run many servers the redundancy becomes seriously wasteful, and there are data-consistency problems on top of that. It is a coarse-grained solution at best.
For this kind of problem, srcache is the fine-grained solution. It works roughly as follows:
(Figure: how srcache works)
When the requirements are simple, srcache is usually paired with the memc module; you can find plenty of examples of that online, so I won't repeat them here. When things get more complicated, for example when the cache key has to be computed dynamically, you have to write a little code, and the lua module is the best choice for that.
Enough chatter; let's see what the Nginx configuration file looks like:
```nginx
lua_shared_dict phoenix_status 100m;

lua_package_path '/path/to/phoenix/include/?.lua;/path/to/phoenix/vendor/?.lua;;';

init_by_lua_file /path/to/phoenix/config.lua;

server {
    listen 80;
    server_name foo.com;

    root /path/to/root;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        set $phoenix_key "";
        set $phoenix_fetch_skip 1;
        set $phoenix_store_skip 1;

        rewrite_by_lua_file /path/to/phoenix/monitor.lua;

        srcache_fetch_skip $phoenix_fetch_skip;
        srcache_store_skip $phoenix_store_skip;

        srcache_fetch GET /phoenix/content key=$phoenix_key;
        srcache_store PUT /phoenix/content key=$phoenix_key;

        add_header X-SRCache-Fetch-Status $srcache_fetch_status;
        add_header X-SRCache-Store-Status $srcache_store_status;

        try_files $uri =404;

        include fastcgi.conf;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_intercept_errors on;
        error_page 502 503 504 = /phoenix/failover;
    }

    location = /phoenix/content {
        internal;
        content_by_lua_file /path/to/phoenix/content.lua;
    }

    location = /phoenix/failover {
        internal;
        rewrite_by_lua_file /path/to/phoenix/failover.lua;
    }
}
```
After Nginx starts, the configuration in config.lua is loaded. When a request arrives, srcache is off by default; monitor.lua matches the current request against a set of regular expressions, and once a match succeeds it computes the cache key and switches srcache on. Reads and writes of the cached content are then handled by content.lua.
Let's look at the contents of the "config.lua" file, which is used primarily to record some global configuration information:
```lua
phoenix = {}

phoenix["memcached"] = {
    default = {
        timeout = 100,                            -- connect timeout (ms)
        keepalive = {idle = 10000, size = 100},   -- keepalive pool
    },
    {host = "127.0.0.1", port = "11211"},
    {host = "127.0.0.1", port = "11212"},
    {host = "127.0.0.1", port = "11213"},
}

phoenix["rule"] = {
    default = {
        expire = 600,      -- cache TTL in seconds
        min_uses = 0,
        max_errors = 0,
        query = {
            ["debug"] = false,
        },
    },
    {
        regex = "^/foo/bar",
        query = {
            ["page"] = function(v)
                if v == "" or v == nil then
                    return 1
                end
                return tonumber(v) or false
            end,
            ["limit"] = true,
        },
    },
}
```
Next, the "monitor.lua" file, which is primarily used to compute the cache key and switch the srcache module on:
```lua
local status = require "status"
local status = status:new(ngx.shared.phoenix_status)

local request_uri_without_args = ngx.re.sub(ngx.var.request_uri, "\\?.*", "")

table.unpack = table.unpack or unpack

for index, rule in ipairs(phoenix["rule"]) do
    if type(rule["regex"]) == "string" then
        rule["regex"] = {rule["regex"], ""}
    end

    local regex, options = table.unpack(rule["regex"])

    if ngx.re.match(request_uri_without_args, regex, options) then
        local default = phoenix["rule"]["default"]

        local expire = rule["expire"] or default["expire"]
        local min_uses = rule["min_uses"] or default["min_uses"]
        local max_errors = rule["max_errors"] or default["max_errors"]

        local key = {
            ngx.var.request_method, " ",
            ngx.var.scheme, "://",
            ngx.var.host,
            request_uri_without_args,
        }

        rule["query"] = rule["query"] or {}

        if default["query"] then
            for key, value in pairs(default["query"]) do
                if not rule["query"][key] then
                    rule["query"][key] = value
                end
            end
        end

        local query = {}
        local args = ngx.req.get_uri_args()

        for name, value in pairs(rule["query"]) do
            if type(value) == "function" then
                value = value(args[name])
            end

            if value == true then
                value = args[name]
            end

            if value then
                query[name] = value
            elseif args[name] then
                return
            end
        end

        query = ngx.encode_args(query)

        if query ~= "" then
            key[#key + 1] = "?"
            key[#key + 1] = query
        end

        key = table.concat(key)
        key = ngx.md5(key)

        ngx.var.phoenix_key = key

        local now = ngx.time()

        if ngx.var.arg_phoenix then
            ngx.var.phoenix_fetch_skip = 0
        else
            -- check the current and the previous minute buckets
            for i = 0, 1 do
                local errors = status:get_errors(index, now - i * 60)

                if errors >= max_errors then
                    ngx.var.phoenix_fetch_skip = 0
                    break
                end
            end
        end

        local uses = status:incr_uses(key, 1)

        if uses >= min_uses then
            local timestamp = status:get_timestamp(key)

            if now - timestamp >= expire then
                ngx.var.phoenix_store_skip = 0
            end
        end

        break
    end
end
```
Then the "content.lua" file, which reads and writes memcached through the lua-resty-memcached library:
```lua
local memcached = require "resty.memcached"
local status = require "status"
local status = status:new(ngx.shared.phoenix_status)

local key = ngx.var.arg_key
local index = ngx.crc32_long(key) % #phoenix["memcached"] + 1
local config = phoenix["memcached"][index]
local default = phoenix["memcached"]["default"]

local host = config["host"] or default["host"]
local port = config["port"] or default["port"]
local timeout = config["timeout"] or default["timeout"]
local keepalive = config["keepalive"] or default["keepalive"]

local memc, err = memcached:new()

if not memc then
    ngx.log(ngx.ERR, err)
    ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
end

if timeout then
    memc:set_timeout(timeout)
end

local ok, err = memc:connect(host, port)

if not ok then
    ngx.log(ngx.ERR, err)
    ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
end

local method = ngx.req.get_method()

if method == "GET" then
    local res, flags, err = memc:get(key)

    if err then
        ngx.log(ngx.ERR, err)
        ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
    end

    if res == nil and flags == nil and err == nil then
        ngx.exit(ngx.HTTP_NOT_FOUND)
    end

    ngx.print(res)
elseif method == "PUT" then
    local value = ngx.req.get_body_data()
    local expire = ngx.var.arg_expire or 86400

    local ok, err = memc:set(key, value, expire)

    if not ok then
        ngx.log(ngx.ERR, err)
        ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
    end

    status:set_timestamp(key)
else
    ngx.exit(ngx.HTTP_NOT_ALLOWED)
end

if type(keepalive) == "table" then
    if keepalive["idle"] and keepalive["size"] then
        memc:set_keepalive(keepalive["idle"], keepalive["size"])
    end
end
```
Then the "failover.lua" file, which activates the disaster-recovery mode when the backend errors out:
```lua
ngx.req.set_uri_args(ngx.var.args .. "&phoenix")
ngx.req.set_uri(ngx.var.uri, true)
```
In addition, there is a "status.lua" file, which keeps timestamps plus per-minute use and error counters in the shared dictionary:
```lua
local status = {}

local get_timestamp_key = function(key)
    key = {
        "phoenix", "status", "timestamp", key,
    }
    return table.concat(key, ":")
end

local get_uses_key = function(key, timestamp)
    key = {
        "phoenix", "status", "uses", key,
        os.date("%Y%m%d%H%M", timestamp),
    }
    return table.concat(key, ":")
end

local get_errors_key = function(key, timestamp)
    key = {
        "phoenix", "status", "errors", key,
        os.date("%Y%m%d%H%M", timestamp),
    }
    return table.concat(key, ":")
end

local get = function(shared, key)
    return shared:get(key)
end

local set = function(shared, key, value, expire)
    return shared:set(key, value, expire or 86400)
end

local incr = function(shared, key, value, expire)
    value = value or 1

    local counter = shared:incr(key, value)

    if not counter then
        shared:add(key, 0, expire or 86400)
        counter = shared:incr(key, value)
    end

    return counter
end

function status:new(shared)
    return setmetatable({shared = shared}, {__index = self})
end

function status:get_timestamp(key)
    return get(self.shared, get_timestamp_key(key)) or 0
end

function status:set_timestamp(key, value, expire)
    key = get_timestamp_key(key)
    value = value or ngx.time()
    return set(self.shared, key, value, expire)
end

function status:get_uses(key, timestamp)
    timestamp = timestamp or ngx.time()
    key = get_uses_key(key, timestamp)
    return get(self.shared, key) or 0
end

function status:incr_uses(key, value, expire)
    key = get_uses_key(key, ngx.time())
    value = value or 1
    return incr(self.shared, key, value, expire)
end

function status:get_errors(key, timestamp)
    timestamp = timestamp or ngx.time()
    key = get_errors_key(key, timestamp)
    return get(self.shared, key) or 0
end

function status:incr_errors(key, value, expire)
    key = get_errors_key(key, ngx.time())
    value = value or 1
    return incr(self.shared, key, value, expire)
end

return status
```
One last question: how do you tell whether the cache is taking effect? Try the following command:
```shell
shell> curl -v "http://foo.com/test?x=123&y=abc"
< X-SRCache-Fetch-Status: HIT
< X-SRCache-Store-Status: BYPASS
```
At present I mainly use srcache to cache JSON result sets from various APIs. These APIs also support JSONP, meaning the client can pass a callback parameter. If you cache indiscriminately, the results both with and without a callback get stored, roughly doubling memory consumption even though their content is essentially the same. So in practice we should cache only the data without a callback; for callback requests, the xss-nginx-module can wrap the cached JSON on the fly.
How much performance improves after enabling srcache depends on the environment, but it is easily an order-of-magnitude gain, and more importantly, all of this is completely transparent to the business layer. Don't just stare at it, give it a try!
Nginx Cache Solution: Srcache