In a mobile app market project, the most heavily requested function is the upgrade-query interface. Periodically, the client sends the names and versions of the applications installed on the phone to the server; the server compares versions to decide whether each application needs an upgrade and, if so, returns the relevant information. A phone typically holds dozens or even hundreds of applications, so when a massive number of clients send requests at once, the pressure on the server is easy to imagine.
The client sends its request in the following format, using either GET or POST:
packages=foo|1&packages=bar|2&...
The server side is implemented in Lua and uses Redis's PIPELINING mechanism to query data in batches:
local redis = require "resty.redis"
local cjson = require "cjson"
local config = require "config"

ngx.header["Content-Type"] = "application/json; charset=utf-8"

local args = ngx.req.get_uri_args(1000)

if ngx.var.request_method == "POST" then
    ngx.req.read_body()
    for key, val in pairs(ngx.req.get_post_args(1000)) do
        args[key] = val
    end
end

-- normalize a single "packages" value into a table
if type(args["packages"]) == "string" then
    args["packages"] = {args["packages"]}
end

if type(args["packages"]) ~= "table" then
    ngx.exit(ngx.HTTP_BAD_REQUEST)
end

local cache = redis:new()

local res, err = cache:connect(config.host, config.port)
if not res then
    ngx.log(ngx.ERR, "error: ", err)
    ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
end

cache:init_pipeline()

local packages = {}

for _, val in ipairs(args["packages"]) do
    if type(val) == "string" then
        local name, version = string.match(val, "([^|]+)|([0-9]+)")
        if name and version then
            packages[name] = tonumber(version)
            cache:hget(name, "all")
        end
    end
end

local res, err = cache:commit_pipeline()
if not res then
    ngx.log(ngx.ERR, "error: ", err)
    ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
end

local data = {}

for _, val in ipairs(res) do
    if type(val) == "string" then
        val = cjson.decode(val)
        -- keep only applications whose server-side version is newer
        if packages[val["name"]] < val["version"] then
            data[#data + 1] = val
        end
    end
end

ngx.say(cjson.encode(data))
Supplement: the application data is stored as a HASH in Redis, but HGETALL consumes a lot of CPU, so some extra processing is done: a redundant field named "all" stores the whole record, which turns the O(N) HGETALL operation into an O(1) HGET operation and improves efficiency. For details, see the earlier post on Redis's HGETALL.
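To illustrate the redundant-field layout (the key name here reuses the "foo" from the request-format example; the individual fields are hypothetical), each application hash carries its normal fields plus an "all" field holding the pre-serialized JSON, so a single HGET retrieves the whole record:

```
redis> HSET foo name "foo"
redis> HSET foo version 2
redis> HSET foo all "{\"name\":\"foo\",\"version\":2}"
redis> HGET foo all
```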
The code above ran smoothly for a while, but as traffic grew, a problem surfaced: Redis stalled from time to time, because single-threaded Redis cannot carry such large PIPELINING requests. Normally we would run multiple Redis instances to work around the single-thread bottleneck, but in this case the pipelines were so large that adding instances would not have solved the problem, and the system did not have much spare memory anyway.
The solution we finally came up with is to use Nginx/Lua to slim down the Redis PIPELINING. Although a client may send hundreds of applications to the server in a single upgrade query, very few of them actually have an upgrade available. If we can filter out the applications that do not need upgrading and query Redis only for the ones that might, system performance will undoubtedly improve greatly: once a fat request is filtered, it becomes a thin request. In practice, we cache application versions in Nginx/Lua shared memory; each client request is first filtered against the shared memory, and only the remaining entries are looked up in Redis.
To use shared memory, declare it in the Nginx configuration file:
lua_shared_dict versions 100m;
The improved code is as follows; pay attention to the lines that query and set the shared memory:
local redis = require "resty.redis"
local cjson = require "cjson"
local config = require "config"

local versions = ngx.shared.versions

ngx.header["Content-Type"] = "application/json; charset=utf-8"

local args = ngx.req.get_uri_args(1000)

if ngx.var.request_method == "POST" then
    ngx.req.read_body()
    for key, val in pairs(ngx.req.get_post_args(1000)) do
        args[key] = val
    end
end

if type(args["packages"]) == "string" then
    args["packages"] = {args["packages"]}
end

if type(args["packages"]) ~= "table" then
    ngx.exit(ngx.HTTP_BAD_REQUEST)
end

local cache = redis:new()

local res, err = cache:connect(config.host, config.port)
if not res then
    ngx.log(ngx.ERR, "error: ", err)
    ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
end

cache:init_pipeline()

local packages = {}

for _, val in ipairs(args["packages"]) do
    if type(val) == "string" then
        local name, version = string.match(val, "([^|]+)|([0-9]+)")
        if name and version then
            version = tonumber(version)
            -- hit Redis only when the cached version is unknown
            -- or newer than the client's version
            local cached = versions:get(name)
            if not cached or cached > version then
                packages[name] = version
                cache:hget(name, "all")
            end
        end
    end
end

local res, err = cache:commit_pipeline()
if not res then
    ngx.log(ngx.ERR, "error: ", err)
    ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
end

local data = {}

for _, val in ipairs(res) do
    if type(val) == "string" then
        val = cjson.decode(val)
        if packages[val["name"]] < val["version"] then
            data[#data + 1] = val
        end
        -- cache the latest version with a randomized expiration
        local timeout = math.random(600, 1200)
        versions:set(val["name"], val["version"], timeout)
        packages[val["name"]] = nil
    end
end

-- names still left in packages had no record in Redis; cache a sentinel
for name in pairs(packages) do
    versions:set(name, -1, 1800)
end

ngx.say(cjson.encode(data))
Some may object: in essence, the request volume has not been reduced, it has merely been shifted from Redis to Nginx, so how can this improve performance? The key is that Redis is single-threaded, while Nginx can use the worker_processes directive to take full advantage of multi-core CPUs, so overall performance improves.
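As a sketch of how the pieces fit together (the location path and Lua file name are illustrative, not from the original article), the relevant nginx.conf fragments might look like:

```
worker_processes auto;               # one worker per CPU core

http {
    lua_shared_dict versions 100m;   # shared cache of application versions

    server {
        location /upgrade {
            content_by_lua_file conf/upgrade.lua;
        }
    }
}
```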
Supplement: when setting the shared-memory expiration time in the code, a random number is used instead of a fixed value. This design prevents a large amount of data from expiring at the same moment, which would cause system performance jitter.
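The effect of the jitter is easy to demonstrate outside Nginx; here is a small Python illustration (not from the original article) of the same idea as math.random(600, 1200): a burst of simultaneous inserts gets spread over a ~10-minute window instead of producing one expiry spike.

```python
import random

def jittered_ttl(low=600, high=1200):
    """Pick an expiration between low and high seconds, inclusive."""
    return random.randint(low, high)

# simulate 1000 cache entries written in the same burst
ttls = [jittered_ttl() for _ in range(1000)]

# with a fixed TTL, all 1000 entries would expire in the same second;
# with jitter, expirations are spread across many distinct seconds
print(min(ttls), max(ttls), len(set(ttls)))
```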
...
Of course, as traffic continues to grow, the solution in this article may run into problems again. What then? In fact, a function like the upgrade query should not be designed as a synchronous call at all; if it were changed to an asynchronous model, most of these problems would disappear. But that is a topic for another time; the current solution is good enough for now.
Original article: "Using Nginx/Lua to Slim Down Redis PIPELINING". Thanks to the original author for sharing.