A simple Redis application
Requirements
I had heard of Redis for a long time but never studied it. A while ago some friends needed it, so I took a first look while helping them out; recently I ran into a need of my own, so I finally sat down and studied it.
My situation here is actually very simple:
I run an application on a VPS that provides a short-link service. The original implementation was simply a reverse proxy on the web server, mapping short links to the backend through URL rewriting. Functionally that is perfectly fine, but I later noticed that when I share a short link on social networks, it draws a burst of accesses in a short period. With every one of those pushed through to the backend to query the database, the server comes under some pressure; my VPS has a very low-end configuration, so under a sudden access spike there is a real chance of 50x errors.
So I decided to put Redis in front as a caching proxy. Strictly speaking, memcached could meet this need just as well, since persistence is not a must; at worst the cache is simply rebuilt from the backend after a restart. But I am more interested in Redis, so Redis it is.
Installation and configuration
Installing Redis is very simple. I use Debian, so a plain apt-get install redis-server does it.
There is not much to say about the configuration either. I kept mostly defaults and only put a cap on memory usage; resources on the VPS are tight and need to be conserved, and the short-link data set is not large anyway.
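Concretely, the memory cap goes into /etc/redis/redis.conf. The values below are only an illustration, not my actual settings:

```
maxmemory 64mb
maxmemory-policy allkeys-lru
```

With allkeys-lru, Redis evicts the least-recently-used keys once the cap is reached, which suits a cache of short-link IDs well.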
Once the installation is complete, start it with service redis-server start. You can then use redis-cli to operate on the database from the command line.
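For example, a quick sanity check against database 1 from redis-cli (the key and URL here are made up):

```
$ redis-cli -n 1
127.0.0.1:6379[1]> SET abc123 "http://example.com/some/long/path"
OK
127.0.0.1:6379[1]> GET abc123
"http://example.com/some/long/path"
```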
Although Python is what I am most familiar with, setting up an environment for such a simple application is not worth the trouble, so I just used the PHP stack that was already in place.
First you need a Redis client library for PHP. The two clients most prominently recommended officially are predis and phpredis. Intuitively, phpredis, which is written in C, should perform better, but for the sake of convenience I went with the pure-PHP predis.
After downloading the latest stable release of Predis, run bin/create-single-file (php-cli must be installed on the system) to generate a standalone Predis.php, then drop that file into your project path. Quite simple.
Function implementation
This cache function is simple:
Take the short-link ID from the request path and look it up in Redis. If a URL is found, immediately return a 302 redirect to it. If nothing is found, query the backend, take the redirect URL out of its 302 response, save it into Redis, and then perform the 302 redirect.
One thing worth noting: an ID that the backend fails to resolve should be cached as well, with a 404 returned, so that a bad ID does not get forwarded to the backend over and over. The risk is that if such an ID becomes valid later, it would never be looked up again; so failed IDs also get a timeout and are deleted when it expires, after which they can be re-queried.
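The negative-caching idea can be sketched on its own. In the snippet below, a tiny in-memory stub stands in for the Predis client so the logic runs without a server; with a real Predis\Client the same get()/set()/setex() calls apply. The function and class names are my own, not from the original code.

```php
<?php
// Stub with the same get()/set()/setex() shape as a Predis client,
// so the caching logic below can run without a Redis server.
class StubRedis {
    private $data = [];
    public function get($key)                 { return isset($this->data[$key]) ? $this->data[$key] : null; }
    public function set($key, $value)         { $this->data[$key] = $value; }
    public function setex($key, $ttl, $value) { $this->data[$key] = $value; } // TTL ignored in the stub
}

// Resolve an ID: return its URL, or "" for a known-bad ID.
// $backend_lookup is only called on a cache miss.
function resolve_id($redis, $id, $backend_lookup) {
    $url = $redis->get($id);
    if ($url !== null) {
        return $url;                 // cache hit, including cached failures ("")
    }
    $url = $backend_lookup($id);
    if ($url === "") {
        // Cache the failure too, but only for an hour, so an ID that
        // becomes valid later can still be picked up on re-query.
        $redis->setex($id, 3600, "");
    } else {
        $redis->set($id, $url);
    }
    return $url;
}
```

With a real client, SETEX sets the value and its expiry atomically, which avoids the small window between separate SET and EXPIRE calls.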
The main part of the code is as follows:
function raise_404() {
    header('HTTP/1.1 404 Not Found');
    header('Status: 404 Not Found');
    die();
}

function redirect($url) {
    header("Location: $url");
}

require 'Predis.php';

$redisdb = new Predis\Client([
    'database' => 1,
]);

$query = explode('/', $_SERVER['REQUEST_URI']);
if (!isset($query[1]) || $query[1] == "") raise_404();
$id = $query[1];

$url = $redisdb->get($id);
if ($url === null) {            // not cached yet: ask the backend
    $url = get_redirect($id);
    $redisdb->set($id, $url);
    if ($url == "") {           // cache the failure too, but let it expire
        $redisdb->expire($id, 3600);
    }
}

if ($url != "") {
    redirect($url);
} else {
    raise_404();
}
Here get_redirect is a function that fetches the real URL from the backend via curl; its implementation is not listed here.
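The original does not show it, but for completeness, one possible shape of get_redirect might look like the following. The backend address is a hypothetical placeholder, extract_location is a helper I made up, and the backend is assumed to answer with a 302 whose Location header carries the real URL (requires the php-curl extension):

```php
<?php
// Hypothetical get_redirect(): ask the backend for the ID and read the
// Location header out of its 302 response. The address is a placeholder.
function get_redirect($id) {
    $ch = curl_init("http://127.0.0.1:8080/" . rawurlencode($id));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // capture output instead of printing it
    curl_setopt($ch, CURLOPT_HEADER, true);          // include response headers in the output
    curl_setopt($ch, CURLOPT_NOBODY, true);          // HEAD-style request, no body needed
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false); // we want the 302 itself, not its target
    $headers = curl_exec($ch);
    curl_close($ch);
    return $headers === false ? "" : extract_location($headers);
}

// Pull the Location header out of a raw HTTP header block; "" if absent.
function extract_location($headers) {
    if (preg_match('/^Location:\s*(\S+)/mi', $headers, $m)) {
        return $m[1];
    }
    return "";
}
```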
After it had been running for a while, the effect was quite obvious. According to the log statistics, for every 500 requests the cache proxy handles, the backend actually serves fewer than 100; and as more IDs accumulate in the cache over time, fewer and fewer requests reach the backend.
After this experiment succeeded, I applied the same caching to the RSS feed. That one is more involved, though the principle is the same; because a feed has too many fields, I used Redis's hash value type instead (support for multiple value types is one of Redis's strengths). I also added Last-Modified and ETag support, hoping that crawlers are smart enough to use them and further cut unnecessary traffic.
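The hash layout and the conditional-response handling are not shown in the post; a minimal sketch of how they could fit together, with invented field names, might look like this:

```php
<?php
// Sketch of the hash-based RSS caching described above. The field names
// ('body', 'etag', 'modified') are my own illustration. With Predis, the
// hash would be read with $redis->hgetall($key) and written with
// $redis->hmset($key, $fields).

// Decide whether a conditional GET can be answered with 304 Not Modified.
function feed_not_modified($feed, $if_none_match, $if_modified_since) {
    return ($if_none_match !== '' && $if_none_match === $feed['etag'])
        || ($if_modified_since !== '' && $if_modified_since === $feed['modified']);
}

// Serve a cached feed, honoring If-None-Match / If-Modified-Since.
function serve_feed($feed, $if_none_match, $if_modified_since) {
    if (feed_not_modified($feed, $if_none_match, $if_modified_since)) {
        header('HTTP/1.1 304 Not Modified'); // the crawler already has this copy
        return '';
    }
    header('ETag: ' . $feed['etag']);
    header('Last-Modified: ' . $feed['modified']);
    return $feed['body'];
}
```

A real implementation would follow RFC 7232 more strictly (If-None-Match takes precedence over If-Modified-Since); the simple OR here is a simplification.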