Should your enterprise cache with memcached or Redis?
Anyone building a modern, database-driven web application that needs better performance runs into this question sooner or later. When developers look for ways to speed up an application, caching is usually the first place they turn, and memcached and Redis are the two candidates most often weighed against each other.
These two well-established cache engines have a great deal in common, but they also have important differences. Redis, the newer and more flexible of the two, is viewed by most engineers as the preferred choice, but don't dismiss memcached too quickly: there are notable exceptions that should not be overlooked.
I. Similarities between the two
Let's start with the similarities. Memcached and Redis are both in-memory, key-value data stores. They belong to the NoSQL family of data management solutions and are built on the same key-value data model. Both keep all of their data in memory, which naturally makes them well suited to serve as a caching layer. In performance terms the two stores also have much in common, with nearly identical characteristics (and metrics) and a strong focus on data throughput and latency under load.
Beyond being in-memory key-value stores, memcached and Redis are both mature and hugely popular open source projects. Memcached was originally developed by Brad Fitzpatrick in 2003 for the LiveJournal website. It was later rewritten in C (the original implementation was in Perl) and placed in the public domain, and it went on to become a building block of modern web applications. Current development of memcached focuses on stability and optimization rather than on adding new features.
Redis was created by Salvatore Sanfilippo in 2009, and Sanfilippo remains the project's lead developer and sole maintainer today. Redis is sometimes described as an "enhanced memcached," which is hardly surprising given how much it draws on lessons learned from memcached. Redis surpasses memcached in versatility: it is more powerful and more flexible, but also more complex.
Adopted by many companies and deployed in countless mission-critical production environments, memcached and Redis are supported by client libraries in every conceivable programming language and are bundled in a wide range of libraries and packages used by developers. In fact, it is hard to find a web stack that does not include built-in support for memcached or Redis.
Why are memcached and Redis so popular? Beyond their sheer effectiveness, both are remarkably easy to get started with, and that ease of use is well known among developers. It takes only a few minutes to install either one and have it working with an application. In other words, a small investment of time and effort yields an immediate and dramatic performance improvement, often by orders of magnitude. Who could resist a solution that simple with a payoff that large?
II. When should I use memcached?
Because Redis arrived later and offers more features than memcached, developers often treat it as the default choice. There are, however, two specific scenarios where memcached still has the upper hand. The first is caching small, static data, the classic example being HTML code fragments. Memcached's internal memory management, while less sophisticated than Redis's, is more efficient for this workload because memcached spends less memory on metadata. Strings, the only data type memcached supports, are ideal for data that is only ever read back, since a string needs no further server-side processing.
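As a rough sketch of this pattern (not taken from the original article), the example below caches a rendered HTML fragment in memcached using Python's pymemcache client. The key name, TTL, and the `render_sidebar` helper are invented for illustration.

```python
# Minimal sketch: caching a static HTML fragment as a plain string in memcached.
# Key name, TTL, and the render_sidebar() helper are illustrative assumptions.
from pymemcache.client.base import Client

client = Client(("localhost", 11211))

def render_sidebar():
    # Stand-in for an expensive template render or database query.
    return "<aside><h2>Popular posts</h2>...</aside>"

def get_sidebar_html():
    cached = client.get("fragment:sidebar")           # memcached stores plain bytes
    if cached is not None:
        return cached.decode("utf-8")
    html = render_sidebar()
    client.set("fragment:sidebar", html, expire=600)  # cache for 10 minutes
    return html
```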
Memcached also has the edge over Redis when it comes to horizontal scaling. Thanks to its simpler design philosophy and deliberately limited feature set, memcached is much easier to scale out than Redis. That said, a number of tried-and-tested approaches exist for scaling Redis across multiple servers, and the upcoming Redis 3.0 release (its release-candidate notes are publicly available) will include a built-in clustering mechanism designed specifically for scale-out deployments.
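Part of what makes memcached so easy to scale is that sharding happens entirely on the client side. Here is a minimal sketch of that idea using pymemcache's HashClient; the host names are placeholders, not a recommended topology.

```python
# Sketch: scaling memcached horizontally with client-side hashing.
# HashClient spreads keys across the listed servers; host names are examples.
from pymemcache.client.hash import HashClient

client = HashClient([
    ("cache-1.internal", 11211),
    ("cache-2.internal", 11211),
    ("cache-3.internal", 11211),
])

# Each key is routed to one of the servers; adding capacity is mostly a matter
# of adding another address to this list.
client.set("fragment:homepage", "<html>...</html>", expire=300)
print(client.get("fragment:homepage"))
```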
III. When should I use Redis?
Unless you have a specific constraint that points to memcached (such as an existing legacy application), or your use case matches one of the two scenarios described above, just pick Redis and get on with it. As a caching solution, Redis gives you fine-grained control over what is cached, how it is evicted, and whether it is persisted, along with better overall efficiency.
Redis shows clear advantages in nearly every aspect of cache management. A cache evicts stale data from memory to make room for new data, and the two systems differ in how they do this. Memcached's eviction mechanism uses an LRU (least recently used) algorithm and somewhat arbitrarily evicts existing data of a similar size to the incoming data. Redis, by contrast, lets users exercise much finer control, offering six different eviction policies to make better use of cache memory. Redis also employs more sophisticated approaches to memory management and to selecting candidate objects for eviction.
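As a concrete illustration (not from the original article), the sketch below sets a memory cap and picks one of the six eviction policies at runtime with CONFIG SET, via the redis-py client. The memory limit and policy choice are examples, not recommendations.

```python
# Sketch: choosing a Redis eviction policy at runtime with CONFIG SET.
import redis

r = redis.Redis(host="localhost", port=6379)

# Cap the memory Redis may use for data...
r.config_set("maxmemory", "256mb")

# ...and choose how it evicts when that cap is reached. The six policies are:
#   noeviction, allkeys-lru, volatile-lru, allkeys-random, volatile-random, volatile-ttl
r.config_set("maxmemory-policy", "allkeys-lru")

print(r.config_get("maxmemory-policy"))
```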
Redis also gives administrators far more latitude in what they can cache. Memcached limits key names to 250 bytes and values to 1MB, and it works only with plain strings. Redis, in contrast, allows keys and values to be as large as 512MB each, and both are binary safe. Redis also supports six data types, which enables much smarter caching and data manipulation and opens the door to a wide range of possibilities for application developers.
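To make that concrete, here is a small sketch (with invented key names) of using two of those data types, lists and sets, directly as cache structures instead of flattening everything into strings.

```python
# Sketch: using Redis lists and sets directly as cache structures.
# Key names and values are illustrative.
import redis

r = redis.Redis()

# A capped list of the most recently viewed items for a user.
r.lpush("recent:user:42", "item:1001")
r.ltrim("recent:user:42", 0, 9)          # keep only the 10 newest entries

# A set of tags attached to an article; duplicates are ignored automatically.
r.sadd("tags:article:7", "redis", "caching", "nosql")

print(r.lrange("recent:user:42", 0, -1))
print(r.smembers("tags:article:7"))
```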
Instead of storing an object as a serialized string, Redis lets developers keep an object's fields and values in a hash and manage them under a single key. With a hash, there is no need to fetch the entire string, deserialize it, update one value, reserialize the object, and write the whole string back to the cache every time a single field changes, which means lower resource consumption and a significant performance gain. Other data types that Redis supports, such as lists and sets, can be used to build even richer cache management patterns.
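The following sketch contrasts the two approaches with the redis-py client; the user object and its field names are made up for the example.

```python
# Sketch: updating one field of a cached object stored as a Redis hash,
# versus round-tripping a serialized string. Object and fields are examples.
import json
import redis

r = redis.Redis()

# String approach: the whole object must be fetched, decoded, and rewritten.
r.set("user:1000:json", json.dumps({"name": "Alice", "visits": 1}))
obj = json.loads(r.get("user:1000:json"))
obj["visits"] += 1
r.set("user:1000:json", json.dumps(obj))

# Hash approach: touch only the field that changed.
r.hset("user:1000", "name", "Alice")
r.hincrby("user:1000", "visits", 1)      # atomic, no fetch/deserialize cycle
print(r.hgetall("user:1000"))
```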
Another important advantage of Redis is that the data it holds is not opaque, so the server can operate on it directly. Redis provides more than 160 commands, most of them devoted to data operations, and server-side Lua scripting lets you embed logic in the data store itself. These built-in commands and user scripts give you great flexibility to handle data processing entirely inside Redis, without shuttling data across the network to a separate processing system.
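As a small example of server-side logic (assumed for illustration, not from the article), the Lua script below counts hits on a page and sets an expiry on the first hit, all in one atomic EVAL call.

```python
# Sketch: embedding a small piece of logic in Redis with a server-side Lua script.
# The key name and TTL are illustrative.
import redis

r = redis.Redis()

HIT_COUNTER = """
local hits = redis.call('INCR', KEYS[1])
if hits == 1 then
    redis.call('EXPIRE', KEYS[1], ARGV[1])
end
return hits
"""

# EVAL runs atomically inside the server, so no data crosses the network
# between the increment and the expiry.
hits = r.eval(HIT_COUNTER, 1, "hits:page:home", 3600)
print(hits)
```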
Redis also offers optional and tunable data persistence, designed so that cached content can be restored after a planned shutdown or an unplanned failure. While we usually emphasize the volatile and transient nature of cached data, persisting it to disk is quite valuable in many caching scenarios: the data stored on disk can be reloaded into the cache quickly after a restart, greatly shortening the cache warm-up period that would otherwise be spent rebuilding the cache contents from the primary data store.
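For illustration, the sketch below enables Redis's two persistence mechanisms (RDB snapshots and the append-only file) at runtime; the thresholds are examples, and in practice these settings usually live in redis.conf.

```python
# Sketch: turning on Redis persistence at runtime. Thresholds are examples.
import redis

r = redis.Redis()

# RDB snapshots: save to disk if at least 1 key changed in 900 seconds,
# or at least 10 keys changed in 300 seconds.
r.config_set("save", "900 1 300 10")

# AOF: append every write to a log so the dataset can be rebuilt after a restart.
r.config_set("appendonly", "yes")

# A snapshot can also be requested explicitly in the background.
r.bgsave()
```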
Last but not least, Redis provides replication. Replication lets a caching architecture run in a high-availability configuration, so the cache can keep serving the application without interruption even when a node fails. A mature caching layer should be able to survive failures with little or no impact on user experience or application performance, and this kind of guarantee for both cache contents and cache availability is, in most deployments, a major advantage.
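A minimal sketch of setting up a replica with the redis-py client follows; the host names are placeholders, and in the Redis versions this article discusses the relevant command is SLAVEOF (later renamed REPLICAOF).

```python
# Sketch: pointing a second Redis instance at a master to create a live replica.
# Host names and ports are illustrative.
import redis

replica = redis.Redis(host="cache-replica.internal", port=6379)

# Tell this instance to replicate from the master.
replica.slaveof("cache-master.internal", 6379)

# Replication state can be checked from either side.
print(replica.info("replication")["role"])   # expected: 'slave' on the replica
```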
IV. Summary

The open source community keeps delivering outstanding technology, and when the goal is to improve application performance through caching, Redis and memcached stand out as the two most acclaimed and battle-tested options for the job. Given its richer feature set and more advanced design, however, Redis is the better general-purpose choice for most teams, with only a few special scenarios still favoring memcached.
So, will your enterprise cache its data with memcached or Redis?