Copyright notice: this is an original article; please do not reprint it without the author's permission.
Recently our project team used all three of these caches, so I went through their respective documentation to see how they really differ. This post sums up the advantages and disadvantages of each cache, for reference only.
Ehcache
Ehcache is widely used in Java projects. It is an open-source cache designed to reduce the cost of repeatedly fetching expensive, high-latency data from an RDBMS. Because Ehcache is robust (it is written in Java), properly licensed (Apache 2.0), and full-featured (described below), it is used at many layers of large, complex, distributed web applications.
What are its features?
1. Fast enough
Ehcache has been around for a long time; after years of effort and countless performance tests, it has been designed and tuned for large, high-concurrency systems.
2. Simple enough
The interface it exposes to developers is simple and straightforward; it takes only a few minutes to go from downloading Ehcache to having it up and running. In fact, many developers do not even know they are using Ehcache, because it is widely embedded in other open-source projects, for example Hibernate.
3. Small enough
The official docs give this a lovely name: small footprint. A typical Ehcache release is under 2 MB, and v2.2.3 is only 668 KB.
4. Lightweight
The core library depends on only one package, SLF4J, and nothing else.
5. Good extensibility
Ehcache provides both in-memory and disk storage for large data sets; recent versions allow multiple CacheManager instances; it is flexible about the objects it stores; it offers LRU, LFU, and FIFO eviction algorithms; its basic properties support hot (runtime) configuration; and it supports many plug-ins.
6. Listeners
Cache manager listeners (CacheManagerEventListener) and cache event listeners (CacheEventListener) are quite handy for gathering statistics or broadcasting data-consistency events; a minimal listener sketch follows below.
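As an illustration of item 6, here is a minimal listener sketch, assuming the Ehcache 2.x API (net.sf.ehcache.event.CacheEventListener); the class name and counter are purely illustrative:

import net.sf.ehcache.CacheException;
import net.sf.ehcache.Ehcache;
import net.sf.ehcache.Element;
import net.sf.ehcache.event.CacheEventListener;
import java.util.concurrent.atomic.AtomicLong;

// Counts put operations so the listener can feed simple statistics.
public class CountingCacheListener implements CacheEventListener {
    private final AtomicLong puts = new AtomicLong();

    public void notifyElementPut(Ehcache cache, Element element) throws CacheException {
        puts.incrementAndGet(); // statistics hook
    }
    public void notifyElementUpdated(Ehcache cache, Element element) throws CacheException { }
    public void notifyElementRemoved(Ehcache cache, Element element) throws CacheException { }
    public void notifyElementExpired(Ehcache cache, Element element) { }
    public void notifyElementEvicted(Ehcache cache, Element element) { }
    public void notifyRemoveAll(Ehcache cache) { }
    public void dispose() { }
    public Object clone() throws CloneNotSupportedException { return super.clone(); }
}

// Registration on a cache instance (illustrative):
// cache.getCacheEventNotificationService().registerListener(new CountingCacheListener());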
How to use it?
Simplicity is one of Ehcache's major features, so using it is naturally just as easy.
Here is a basic usage example:
CacheManager manager = CacheManager.newInstance("src/config/ehcache.xml");
// Constructor arguments: name, maxElementsInMemory, overflowToDisk, eternal, timeToLiveSeconds, timeToIdleSeconds
Ehcache cache = new Cache("testCache", 5000, false, false, 5, 2);
manager.addCache(cache);
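Continuing that snippet, a quick put/get round trip might look like this (a minimal sketch against the Ehcache 2.x API; the key and value are illustrative, and net.sf.ehcache.Element must be imported):

cache.put(new Element("greeting", "hello ehcache"));
Element element = cache.get("greeting");
if (element != null) {
    System.out.println(element.getObjectValue()); // prints "hello ehcache"
}
manager.shutdown(); // release cache resources when the application is done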
The code above references an ehcache.xml file; let's go over some of the properties in that file.
name: cache name.
maxElementsInMemory: maximum number of elements kept in memory.
eternal: whether elements are permanently valid; if true, timeouts are ignored and elements never expire.
timeToIdleSeconds: the allowed idle time (in seconds) for an element before it expires. Used only when eternal=false; optional, default 0, meaning the idle time is unbounded.
timeToLiveSeconds: the time (in seconds) an element may live, measured from creation to expiration. Used only when eternal=false; default 0, meaning elements live indefinitely.
overflowToDisk: when the number of elements in memory reaches maxElementsInMemory, Ehcache writes elements to disk.
diskSpoolBufferSizeMB: buffer size of the DiskStore (disk cache). The default is 30 MB. Each cache should have its own buffer.
maxElementsOnDisk: maximum number of elements in the disk cache.
diskPersistent: whether the disk store persists between restarts of the virtual machine. The default is false.
diskExpiryThreadIntervalSeconds: run interval of the disk-expiry thread, default 120 seconds.
memoryStoreEvictionPolicy: when the maxElementsInMemory limit is reached, Ehcache evicts elements from memory according to this policy. The default policy is LRU; FIFO and LFU are also available.
clearOnFlush: whether to clear the memory store when flush() is called.
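Putting these properties together, a minimal ehcache.xml might look like the sketch below (assuming Ehcache 2.x; the cache name matches the code above, and all values are illustrative rather than recommendations):

<ehcache>
    <!-- Where overflowed or persistent elements are written -->
    <diskStore path="java.io.tmpdir"/>
    <defaultCache maxElementsInMemory="10000" eternal="false"
                  timeToIdleSeconds="120" timeToLiveSeconds="120"
                  overflowToDisk="true"/>
    <!-- The cache referenced as "testCache" in the code above -->
    <cache name="testCache"
           maxElementsInMemory="5000"
           eternal="false"
           timeToIdleSeconds="300"
           timeToLiveSeconds="600"
           overflowToDisk="true"
           diskSpoolBufferSizeMB="30"
           maxElementsOnDisk="100000"
           diskPersistent="false"
           diskExpiryThreadIntervalSeconds="120"
           memoryStoreEvictionPolicy="LRU"/>
</ehcache>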
Memcache
Memcache is a high-performance, distributed object caching system, originally designed to ease the database load of dynamic web sites. You can think of it as one big in-memory hash table: a key-value cache. It is open-source software developed by Danga Interactive for LiveJournal and released under a BSD license.
1. Dependencies
Memcache is written in C and depends on recent versions of GCC and libevent. GCC is its compiler, and its socket I/O is built on libevent. Make sure your system has both before installing memcache.
2. Multithreading support
Memcache supports multiple CPUs working at the same time. The installation package ships with a document called threads.txt, which notes: "By default, memcached is compiled as a single-threaded application." The default build is single-threaded; if you need multiple threads, configure with ./configure --enable-threads. The prerequisite is that your system supports multi-threaded operation. The default number of worker threads is 4; if the number of threads exceeds the number of CPUs, the chance of operation deadlocks rises, so choose a value that fits your own workload.
3. High Performance
Socket communication is handled through libevent, so in theory the performance bottleneck is the network card.
Simple installation:
1. Download memcached and libevent separately and put them in the /tmp directory:
# cd /tmp
# wget http://www.danga.com/memcached/dist/memcached-1.2.0.tar.gz
# wget http://www.monkey.org/~provos/libevent-1.2.tar.gz
2. Install libevent first:
# tar zxvf libevent-1.2.tar.gz
# cd libevent-1.2
# ./configure --prefix=/usr
# make (if GCC is not installed, install GCC first)
# make install
3. Test whether libevent installed successfully:
# ls -al /usr/lib | grep libevent
lrwxrwxrwx 1 root root     21 11?? 17:38 libevent-1.2.so.1 -> libevent-1.2.so.1.0.3
-rwxr-xr-x 1 root root 263546 11?? 17:38 libevent-1.2.so.1.0.3
-rw-r--r-- 1 root root 454156 11?? 17:38 libevent.a
-rwxr-xr-x 1 root root    811 11?? 17:38 libevent.la
lrwxrwxrwx 1 root root     21 11?? 17:38 libevent.so -> libevent-1.2.so.1.0.3
If you see output like this, libevent is installed correctly.
4. Install memcached; you also need to point it at the libevent installation location:
# cd /tmp
# tar zxvf memcached-1.2.0.tar.gz
# cd memcached-1.2.0
# ./configure --with-libevent=/usr
# make
# make install
If an error occurs along the way, please read the error message carefully and configure or add the corresponding libraries or paths accordingly.
When the installation is complete, memcached is placed at /usr/local/bin/memcached.
5. Test for successful installation of memcached:
# ls -al /usr/local/bin/mem*
-rwxr-xr-x 1 root root 137986 11?? 17:39 /usr/local/bin/memcached
-rwxr-xr-x 1 root root 140179 11?? 17:39 /usr/local/bin/memcached-debug
Start the memcached service
1. Start the memcached server:
# /usr/local/bin/memcached -d -m 8096 -u root -l 192.168.77.105 -p 12000 -c 256 -P /tmp/memcached.pid
-d starts memcached as a daemon.
-m is the amount of memory allocated to memcache, in MB; here it is 8096 MB.
-u is the user that runs memcache; here it is root.
-l is the server IP address to listen on; if the machine has more than one address, specify the one you want. Here I use the server's IP, 192.168.77.105.
-p sets the port memcache listens on; I use 12000 here. Ports above 1024 are preferable.
-c sets the maximum number of concurrent connections; the default is 1024. I use 256 here; set it according to your server's load.
-P sets the file in which memcache saves its PID; here it is /tmp/memcached.pid.
2. If you want to end the memcache process, execute:
# cat /tmp/memcached.pid   (or: ps aux | grep memcache, to find the process ID)
# kill <process ID>
You can also start multiple daemons, but the ports cannot be duplicated.
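For example, two instances listening on different ports might be started like this (illustrative values; note that each instance needs its own PID file):
# /usr/local/bin/memcached -d -m 1024 -u root -p 12000 -P /tmp/memcached1.pid
# /usr/local/bin/memcached -d -m 1024 -u root -p 12001 -P /tmp/memcached2.pid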
Memcache Connection
telnet <IP> <port>
Note that before you connect, you need to add a firewall rule for the memcache port on the memcache server:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 12000 -j ACCEPT
Then reload the firewall rules:
# service iptables restart
OK, we should be able to connect to memcache now.
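As a quick sanity check, a telnet session might look like the sketch below (it uses the IP and port from the startup example; the key and value are illustrative; the numbers after the key in the set command are flags, expiry in seconds where 0 means never, and the value length in bytes):
telnet 192.168.77.105 12000
set greeting 0 0 5
hello
STORED
get greeting
VALUE greeting 0 5
hello
END
quit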
Type stats in the client to view memcache status information:
pid: process ID of the memcached server
uptime: number of seconds the server has been running
time: the server's current UNIX timestamp
version: memcache version
pointer_size: pointer size of the operating system (typically 32 bits on a 32-bit system)
rusage_user: cumulative user CPU time of the process
rusage_system: cumulative system CPU time of the process
curr_items: number of items currently stored by the server
total_items: total number of items stored since the server started
bytes: number of bytes currently used by the server to store items
curr_connections: number of connections currently open
total_connections: total number of connections opened since the server started
connection_structures: number of connection structures the server has allocated
cmd_get: total number of get (retrieval) requests
cmd_set: total number of set (storage) requests
get_hits: total number of get hits
get_misses: total number of get misses
evictions: number of items removed to free memory for new items (when the space allocated to memcache is full, old items must be evicted to make room for new ones)
bytes_read: total bytes read from the network (bytes received)
bytes_written: total bytes written to the network (bytes sent)
limit_maxbytes: amount of memory allocated to memcache, in bytes
threads: current number of threads
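From Java code, a minimal client sketch might look like the following (this assumes the spymemcached client library, net.spy.memcached, which this article does not otherwise cover; the host and port follow the startup example above, and the key and value are illustrative):

import net.spy.memcached.MemcachedClient;
import java.net.InetSocketAddress;

public class MemcacheQuickStart {
    public static void main(String[] args) throws Exception {
        // Connect to the memcached instance started earlier.
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("192.168.77.105", 12000));

        // Store a value for 3600 seconds, then read it back.
        client.set("greeting", 3600, "hello memcache").get(); // wait for the asynchronous set to finish
        System.out.println(client.get("greeting"));

        client.shutdown();
    }
}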
Redis
Redis was written after memcache, and we often compare the two. It is also a key-value store, but it comes with a rich set of data types, so for now I would call it a cache and data-flow center.
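To give a feel for those data types, here is a minimal Java sketch (assuming the Jedis client, redis.clients.jedis, and a Redis server on localhost at the default port 6379; the keys and values are illustrative):

import redis.clients.jedis.Jedis;

public class RedisQuickStart {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);

        jedis.set("greeting", "hello redis");          // plain string, memcache-style
        jedis.lpush("recent:logins", "alice", "bob");  // list
        jedis.hset("user:42", "name", "Alice");        // hash
        jedis.sadd("tags", "cache", "nosql");          // set

        System.out.println(jedis.get("greeting"));
        System.out.println(jedis.lrange("recent:logins", 0, -1));

        jedis.close();
    }
}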