Ehcache, Memcache, and Redis: the three major caches

Source: Internet
Author: User
Tags: redis, cluster, key-value store

Our project team recently used all three of these caches, and I spent some time reading their official sites. Each has its own merits. The advantages and disadvantages of each cache are summarized here, for reference only.

Ehcache

Ehcache is widely used in Java projects. It is an open-source caching solution designed to reduce the high cost and latency of fetching data from an RDBMS. Because Ehcache is robust (it is written in Java), properly licensed (Apache 2.0), and full of features (detailed later), it is used across the nodes of large, complex, distributed web applications.

What are the features?

 

1. Fast enough

Ehcache has been around for some time. After several years of effort and countless performance tests, it has been tuned for large, high-concurrency systems.

 

2. Simple enough

The interfaces it exposes to developers are simple and clear; it takes only a few minutes to get Ehcache up and running. In fact, many developers use Ehcache without even knowing it, because it is embedded in other widely used open-source projects, for example Hibernate.

 

3. Small enough

For this feature the official site uses the rather cute phrase "small foot print": an Ehcache release is generally under 2 MB, and v2.2.3 is 668 KB.

 

4. Lightweight enough

The core program depends only on the slf4j package!

 

5. Good scalability

Ehcache provides both memory and disk stores for large data sets. The latest versions allow great flexibility with multiple instances and many kinds of stored objects, provide the LRU, LFU, and FIFO eviction algorithms, support hot configuration of the basic attributes, and offer many plug-ins.
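To make the eviction policies concrete, here is a toy LRU cache in plain Java; this is not Ehcache's implementation, and the class name and capacity are invented. It leans on LinkedHashMap's access-order mode:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: a tiny LRU cache in the spirit of Ehcache's LRU
// memoryStoreEvictionPolicy, built on java.util.LinkedHashMap.
public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCacheSketch(int capacity) {
        super(16, 0.75f, true); // accessOrder=true gives LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once we exceed capacity,
        // analogous to reaching maxElementsInMemory.
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCacheSketch<String, String> cache = new LruCacheSketch<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a", so "b" becomes the eldest
        cache.put("c", "3"); // capacity exceeded: "b" is evicted
        System.out.println(cache.containsKey("b")); // false
        System.out.println(cache.containsKey("a")); // true
    }
}
```

An LFU or FIFO policy would differ only in which entry is chosen for eviction.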

 

6. Listeners

CacheManagerEventListener and CacheEventListener are useful for gathering statistics or broadcasting changes for data consistency.

How to use it?

Simplicity is a major feature of Ehcache, so using it is naturally just as easy.

Here is a piece of basic code:

CacheManager manager = CacheManager.newInstance("src/config/ehcache.xml");
Ehcache cache = new Cache("testCache", 5000, false, false, 5, 2);
manager.addCache(cache);
The code references an ehcache.xml file. Here are some of that file's attributes:
  1. name: the cache's name.
  2. maxElementsInMemory: maximum number of elements held in memory.
  3. eternal: whether elements are valid forever. If true, the timeout settings are ignored.
  4. timeToIdleSeconds: the allowed idle time before an element expires (in seconds). Only meaningful when eternal=false, i.e. the elements are not permanently valid. The default is 0, meaning the idle time is unlimited.
  5. timeToLiveSeconds: how long an element may live before it expires, i.e. the maximum time between creation and expiration. Only used when eternal=false. The default is 0, meaning an unlimited lifetime.
  6. overflowToDisk: when the number of elements in memory reaches maxElementsInMemory, Ehcache writes the overflow to disk.
  7. diskSpoolBufferSizeMB: the buffer size for the DiskStore (disk cache). The default is 30 MB. Each cache should have its own buffer.
  8. maxElementsOnDisk: maximum number of elements cached on disk.
  9. diskPersistent: whether the disk store persists between restarts of the virtual machine. The default is false.
  10. diskExpiryThreadIntervalSeconds: the interval at which the disk expiry thread runs. The default is 120 seconds.
  11. memoryStoreEvictionPolicy: when the maxElementsInMemory limit is reached, Ehcache evicts from memory according to this policy. The default is LRU; FIFO and LFU are also available.
  12. clearOnFlush: whether the memory store is cleared on flush.
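For reference, here is a minimal ehcache.xml sketch tying the attributes above together; the cache name and every value are illustrative, not recommendations:

```xml
<ehcache>
    <!-- Illustrative values only -->
    <diskStore path="java.io.tmpdir"/>
    <cache name="testCache"
           maxElementsInMemory="5000"
           eternal="false"
           timeToIdleSeconds="300"
           timeToLiveSeconds="600"
           overflowToDisk="true"
           diskSpoolBufferSizeMB="30"
           maxElementsOnDisk="100000"
           diskPersistent="false"
           diskExpiryThreadIntervalSeconds="120"
           memoryStoreEvictionPolicy="LRU"/>
</ehcache>
```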

 

Memcache

Memcache is a high-performance, distributed object caching system, designed to ease the database load of dynamic websites. You can think of it as one large in-memory hashtable: a key-value cache. It was developed by Danga Interactive for LiveJournal and released as open-source software under the BSD license.
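The "distributed" part deserves a sketch: memcache servers do not talk to each other; the client decides which server owns a key by hashing it. A toy version in Java follows; the server addresses and the simple modulo scheme are illustrative (real clients typically use consistent hashing, which remaps fewer keys when a server is added):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative only: client-side routing of keys to memcache servers.
// Server addresses are made up.
public class MemcacheRouting {
    static final List<String> SERVERS =
        Arrays.asList("10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211");

    // Hash the key and take it modulo the number of servers.
    static String serverFor(String key) {
        int idx = Math.floorMod(key.hashCode(), SERVERS.size());
        return SERVERS.get(idx);
    }

    public static void main(String[] args) {
        // The same key always routes to the same server.
        System.out.println(serverFor("user:42").equals(serverFor("user:42")));
    }
}
```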

 

1. Dependency

Memcache is written in C and depends on recent versions of GCC and libevent. GCC is its compiler, and it performs socket I/O through libevent. When installing memcache, make sure your system has both.

 

2. Multithreading support

Memcache can keep multiple CPUs working at the same time. This is described in threads.txt in the memcache source tree: by default, memcached is compiled as a single-threaded application. If you need multiple threads, configure with ./configure --enable-threads. To support multi-core systems, make sure your platform supports a multi-threaded mode of operation. The default number of threads is 4; if the number of threads exceeds the number of CPUs, operation deadlocks may occur, so tune it to your workload.

 

3. High Performance

Socket communication is implemented with libevent, so the theoretical performance bottleneck falls on the network card.

 

Simple installation:

1. Download memcached and libevent and put them in the /tmp directory:

# cd /tmp

# wget http://www.danga.com/memcached/dist/memcached-1.2.0.tar.gz

# wget http://www.monkey.org/~provos/libevent-1.2.tar.gz

2. Install libevent first:

# tar zxvf libevent-1.2.tar.gz

# cd libevent-1.2

# ./configure --prefix=/usr

# make (install GCC first if you are told it is missing)

# make install

3. Test whether libevent installed successfully:

# ls -al /usr/lib | grep libevent

lrwxrwxrwx 1 root root     21 Nov 12 libevent-1.2.so.1 -> libevent-1.2.so.1.0.3
-rwxr-xr-x 1 root root 263546 Nov 12 libevent-1.2.so.1.0.3
-rw-r--r-- 1 root root 454156 Nov 12 libevent.a
-rwxr-xr-x 1 root root    811 Nov 12 libevent.la
lrwxrwxrwx 1 root root     21 Nov 12 libevent.so -> libevent-1.2.so.1.0.3

Not bad; they have all been installed.

4. Install memcached, specifying the libevent installation location:

# cd /tmp

# tar zxvf memcached-1.2.0.tar.gz

# cd memcached-1.2.0

# ./configure --with-libevent=/usr

# make

# make install

If an error is reported partway through, read the error message carefully and install or configure the corresponding library or path accordingly.

After installation, memcached will be placed in /usr/local/bin/memcached.

5. Test whether memcached installed successfully:

# ls -al /usr/local/bin/mem*

-rwxr-xr-x 1 root root 137986 Nov 12 12:39 /usr/local/bin/memcached
-rwxr-xr-x 1 root root 140179 Nov 12 12:39 /usr/local/bin/memcached-debug

Starting memcached

1. Start the memcache server:

# /usr/local/bin/memcached -d -m 8096 -u root -l 192.168.77.105 -p 12000 -c 256 -P /tmp/memcached.pid

-d starts it as a daemon;

-m is the amount of memory allocated to memcache, in MB; here it is 8096 MB;

-u is the user running memcache; here I use root;

-l is the IP address of the server to listen on; since there are multiple IP addresses, I have specified 192.168.77.105;

-p is the port memcache listens on; I have set 12000 here; ports above 1024 are preferable;

-c is the maximum number of concurrent connections; the default is 1024; I have set 256 here; size this to your server's load;

-P is the file where memcache saves its PID; here I save it in /tmp/memcached.pid.

 

2. To end the memcache process, run:

# cat /tmp/memcached.pid (or ps aux | grep memcached, to find the process ID)

# kill <process ID>

You can also start multiple daemons, as long as the ports do not clash.

Connecting to memcache

# telnet IP port

Before connecting, add a firewall rule for memcache on the memcache server:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 12000 -j ACCEPT

Then reload the firewall rules:

# service iptables restart

OK. Now you can connect to memcache.

Type stats at the client to view the memcache status:
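A hand-typed session looks roughly like this (the host, port, key, value, and stat figures are illustrative; set's arguments are key, flags, expiry, and byte count, and STORED/END are the server's replies):

```
# telnet 192.168.77.105 12000
set mykey 0 0 5
hello
STORED
get mykey
VALUE mykey 0 5
hello
END
stats
STAT pid 1234
STAT uptime 300
...
END
```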

 

pid: memcache server process ID

uptime: number of seconds the server has been running

time: the server's current UNIX timestamp

version: memcache version

pointer_size: the operating system's pointer size (generally 32 bits on 32-bit systems)

rusage_user: cumulative user time of the process

rusage_system: cumulative system time of the process

curr_items: number of items currently stored on the server

total_items: total number of items stored since the server started

bytes: number of bytes currently used by the server to store items

curr_connections: number of currently open connections

total_connections: number of connections opened since the server started

connection_structures: number of connection structures the server has allocated

cmd_get: total number of get (retrieval) requests

cmd_set: total number of set (save) requests

get_hits: total hits

get_misses: total misses

evictions: number of items deleted to reclaim memory (after the space allocated to memcache is used up, old items must be deleted to make room for new ones)

bytes_read: total bytes read from the network (request bytes)

bytes_written: total bytes sent to the network (result bytes)

limit_maxbytes: amount of memory (in bytes) allocated to memcache

threads: current thread count
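Of these counters, get_hits and get_misses are the ones to watch: together they give the cache hit rate. A small sketch (the sample figures are invented):

```java
// Hit rate = get_hits / (get_hits + get_misses).
// The figures in main() are invented for illustration.
public class HitRate {
    static double hitRate(long getHits, long getMisses) {
        long total = getHits + getMisses;
        return total == 0 ? 0.0 : (double) getHits / total;
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", hitRate(9_000, 1_000)); // prints 0.90
    }
}
```

A low hit rate usually means keys expire too fast, the cache is too small (watch evictions), or the access pattern simply isn't cacheable.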

Redis 

Redis was written after memcache, and we often compare the two. It is a key-value store with a rich set of data types; I am tempted to call it a cache data hub, much like today's logistics centers: orders, packages, sorting, classification, distribution, delivery. In the currently popular LAMP (PHP) architecture, I do not know how the performance of redis + MySQL compares with redis + MongoDB (I heard in a discussion group that MongoDB sharding is unstable).

Let's talk about the features of Redis.

 

1. Persistence support

Redis supports two local persistence mechanisms: RDB and AOF. RDB snapshots are triggered by rules configured in the redis.conf file. AOF instead appends every write to redis to a persistence file (the command that produced each record is saved). If you are not using redis as a database, you may not need AOF: when the data set is huge, replaying it at restart to recover is a massive job!
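As a sketch, the relevant redis.conf directives look like this (the save thresholds shown are just the common defaults; tune them to your data):

```
# RDB: snapshot if at least 1 key changed in 900 s,
# 10 keys in 300 s, or 10000 keys in 60 s
save 900 1
save 300 10
save 60 10000

# AOF: append every write command to appendonly.aof,
# fsyncing once per second
appendonly yes
appendfsync everysec
```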

 

2. Rich Data Types

Redis supports many data types: strings, lists, sets, sorted sets, and hashes. Sina Weibo uses redis as its nosql store and relies on exactly these types: time-ordered feeds, function-ordered feeds, and "my Weibo" style lists are closely tied to lists and sorted sets and their powerful operations.
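For instance, a time-ordered feed maps naturally onto a sorted set, with the post's timestamp as the score. A sketch using redis-cli (the keys, scores, and values are invented):

```
redis-cli> ZADD timeline:user1 1290000000 "post:1"
(integer) 1
redis-cli> ZADD timeline:user1 1290000100 "post:2"
(integer) 1
redis-cli> ZREVRANGE timeline:user1 0 9
1) "post:2"
2) "post:1"
```

ZREVRANGE returns the newest posts first because higher scores (later timestamps) sort last in the set.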

 

3. High Performance

As with memcache, you can imagine it: memory-level operations are naturally far more efficient than disk-level operations, which involve expensive steps such as head seeks, data reads, and page swaps. This is also why nosql emerged; it had to be high-performance.

nosql grew up alongside the RDBMS world: an RDBMS has its own cache layer too, but that cache is never something we can control at the application level.

 

4. Replication

Redis provides a master-slave replication scheme, similar in spirit to MySQL's incremental replication. Replication works somewhat like AOF in that it replicates the commands for new records: when a new record is written to the master, the command is forwarded to the slaves, which replay it to produce the same record. The process is very fast and depends mainly on the network; master and slaves are usually in the same LAN, so redis master-slave synchronization can be considered near-real-time. It also supports one master with many slaves and dynamically adding slaves, with no limit on their number. I think the master-slave topology is still best built as a star rather than a chain (master -> slave -> slave ...): if the first slave in a chain crashes and restarts, it must first receive the recovery data from the master, which is blocking; if the master holds several TB, recovery takes a while, and during that time the downstream slaves cannot synchronize with the master.
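Configuring a slave takes a single directive in its redis.conf (the master's IP and port here are illustrative; recent Redis versions spell the directive replicaof):

```
# On the slave: point at the master and start syncing
slaveof 192.168.77.105 6379
```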

 

5. Rapid releases

Since I first came into contact with redis, four major versions have shipped, with minor versions too many to count. The redis author is a very engaged person: whether by email or on the forum, he answers your questions promptly and patiently, and the project is actively maintained. With someone maintaining it, we can use it with peace of mind. At present the author is steering redis development in the direction of redis clusters.

Redis Installation

 

Installing redis is actually quite simple. In general there are three steps: download the tar package, untar it, and install.

However, I recently hit an installation problem with versions after 2.6.7 on 32-bit CentOS 5.5. Let me share what happened: running make in the redis folder produced the error: undefined reference to '__sync_add_and_fetch_4'

After much searching online I finally found the solution at https://github.com/antirez/redis/issues/736: write CFLAGS=-march=i686 at the top of src/Makefile!

Remember to delete the tree from the failed build, unpack a fresh copy of the source, modify the Makefile, and then make install; the original error will not reappear.

Some of redis's configuration attributes and the operations on its basic types were described in detail in my previous article on redis, so I will not repeat them here (in essence I am being lazy, haha!).

 

Finally, putting memcache and redis side by side inevitably invites a comparison: which should you use, and when? Our group has argued about this for a long time; I will share here what I have seen.

After someone published a benchmark showing memcache performing much better than redis, redis author antirez wrote a blog post on how to stress-test redis and memcache properly. He remarked that much open-source software deserves to be thrown into the toilet only because the stress-test scripts used on it are so naive, and then explained his own methodology: redis vs memcache is definitely an apples-to-apples comparison, not nitpicking. He ran three rounds of tests in the same runtime environment and took the best values.

One thing must be stated: the comparison was done on a single core. Memcache supports multi-core, multi-threaded operation (not enabled by default), so the single-threaded default is what makes the comparison meaningful; with threads enabled, memcache is faster than redis. Why doesn't redis support multi-threading and multiple cores? The author gave his view: first, multithreading makes bug fixing and extending the software much harder, and it introduces data-consistency problems, whereas all redis operations are atomic. Of course, not supporting multithreading has its drawbacks: single-node performance is ultimately capped. Since version 2.2 the author has been concentrating on redis cluster development to compensate for this: where scaling up (vertically) falls short, scale out (horizontally).
