Differences Between Ehcache, Memcache, and Redis


Before working with Redis and Memcache I had no Ehcache development experience. I have recently consulted many documents and blog posts, and this write-up summarizes what I learned; much of the content is drawn from summaries by other bloggers.

Ehcache

EhCache is a pure-Java, in-process caching framework that is fast and lean, and it is the default CacheProvider in Hibernate, so it is used in many nodes of large, complex distributed web applications. Ehcache is a widely used, open-source Java distributed cache, aimed mainly at general-purpose caching, Java EE, and lightweight containers. It features memory and disk stores, cache loaders, cache extensions, cache exception handlers, a gzip caching servlet filter, and support for REST and SOAP APIs.

The main features are:

1. Fast

Ehcache has been around for a long time; after several years of effort and countless performance tests, it ended up being designed for large, highly concurrent systems.

2. Simple

The API exposed to developers is very simple and straightforward; it takes only a few minutes to go from getting Ehcache to using it. In fact, many developers do not even realize they are using Ehcache, because it is embedded in many other open-source projects, for example Hibernate.

3. Lightweight

The core library depends on only one package, SLF4J, and nothing else! A typical Ehcache release is no more than 2 MB; v2.2.3 is only 668 KB.

4. Multiple Cache policies

Ehcache provides memory and disk stores for large data sets, and cached data can be written to disk when the virtual machine restarts. Recent versions allow multiple cache manager instances and flexible object storage, provide LRU, LFU, and FIFO eviction algorithms, support hot configuration of basic properties, and support multiple plug-ins.

5. Listener interfaces for the cache and the cache manager

The cache manager listener (CacheManagerListener) and cache event listener (CacheEventListener) are very useful for gathering statistics or broadcasting data for consistency.

6. Supports distributed caching via RMI, a pluggable API, and so on.

How do I use it?

Simplicity is one of Ehcache's major features, so it is naturally just as easy to use!

Integrated use (in a MyBatis project):

A. Add ehcache-core-2.6.5.jar and mybatis-ehcache-1.0.2.jar to the project.

B. Integrate Ehcache: configure the mapper cache type to the Ehcache implementation of the cache interface.

SqlMapConfig.xml:

<!-- enable the second-level cache -->
<setting name="cacheEnabled" value="true"/>

UserMapper.xml:

<mapper namespace="cn.itcast.mybatis.mapper.UserMapper">
    <!-- Enable the second-level cache under this mapper's namespace.
         type: specifies the implementation class of the cache interface; MyBatis uses PerpetualCache by default.
         To integrate with Ehcache, configure type as the Ehcache implementation of the cache interface. -->
    <cache type="org.mybatis.caches.ehcache.EhcacheCache"/>
</mapper>

Configure ehcache.xml under the classpath:

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="../config/ehcache.xsd">
    <diskStore path="F:\develop\ehcache"/>
    <defaultCache
        maxElementsInMemory="1000"
        maxElementsOnDisk="10000000"
        eternal="false"
        overflowToDisk="false"
        timeToIdleSeconds="120"
        timeToLiveSeconds="120"
        diskExpiryThreadIntervalSeconds="120"
        memoryStoreEvictionPolicy="LRU">
    </defaultCache>
</ehcache>

The code above references an ehcache.xml file; here are the main properties in this file (a short usage sketch follows the list).
  1. name: cache name.
  2. maxElementsInMemory: maximum number of elements cached in memory.
  3. eternal: whether cached objects are permanently valid; if set to true, the timeout settings are ignored.
  4. timeToIdleSeconds: the allowed idle time (in seconds) before an object expires. Only used when eternal=false (objects are not permanently valid); optional, and the default value 0 means the idle time is unlimited.
  5. timeToLiveSeconds: the allowed lifetime (in seconds) before an object expires, i.e. the maximum time between creation and expiration. Only used when eternal=false; the default value 0 means objects live indefinitely.
  6. overflowToDisk: when the number of objects in memory reaches maxElementsInMemory, Ehcache writes objects to disk.
  7. diskSpoolBufferSizeMB: the buffer size of the disk store (DiskStore). The default is 30 MB. Each cache should have its own buffer.
  8. maxElementsOnDisk: maximum number of elements cached on disk.
  9. diskPersistent: whether the disk store persists cached data across restarts of the VM. The default value is false.
  10. diskExpiryThreadIntervalSeconds: interval at which the disk expiry thread runs; the default is 120 seconds.
  11. memoryStoreEvictionPolicy: when the maxElementsInMemory limit is reached, Ehcache evicts entries from memory according to the specified policy. The default policy is LRU; it can also be set to FIFO or LFU.
  12. clearOnFlush: whether the memory store is cleared when the cache is flushed.
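Beyond the MyBatis integration, Ehcache can also be used directly through its Java API. The sketch below is not from the original tutorial; it is a minimal illustration against the Ehcache 2.x API (net.sf.ehcache), assuming the ehcache.xml above is on the classpath. The cache name "userCache" is purely illustrative and is created here from the defaultCache settings.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class EhcacheQuickStart {
    public static void main(String[] args) {
        // Load ehcache.xml from the classpath and create the cache manager
        CacheManager cacheManager = CacheManager.create();
        // Create a cache named "userCache" (hypothetical name) from the defaultCache settings
        cacheManager.addCache("userCache");
        Cache cache = cacheManager.getCache("userCache");

        // Put a key/value pair into the cache
        cache.put(new Element("user:1", "Tom"));

        // Read it back; get() returns null if the entry has expired or been evicted
        Element element = cache.get("user:1");
        if (element != null) {
            System.out.println(element.getObjectValue());
        }
        cacheManager.shutdown();
    }
}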

Memcache:

Memcache is a distributed memory caching system.

    By maintaining a single huge hash table in memory, it can store data in many formats, including images, videos, files, and the results of database queries. Put simply, it loads data into memory and then reads it from memory, which greatly improves read speed.

Working mechanism of Memcache:

First, check whether the data requested by the client is in memcached. If it is, return it directly and do not touch the database at all. If the requested data is not in memcached, query the database, return the result to the client, and at the same time cache it in memcached (the memcached client does not do this automatically; the application must implement it explicitly). When the database is updated, update the data in memcached at the same time to keep them consistent. When the memory allocated to memcached runs out, memcached uses an LRU (Least Recently Used) policy combined with item expiration: expired data is replaced first, then the least recently used data. A minimal code sketch of this flow appears below.

  memcached is a daemon (listener) that runs on one or more servers and receives client connections and operations at any time.
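The read-and-populate flow described above lives in application code rather than in memcached itself. Below is a minimal cache-aside sketch in Java, assuming the spymemcached client library (an assumption, not part of this tutorial) and a hypothetical loadUserFromDb helper standing in for a real database query; the server address matches the one configured later in this article.

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class CacheAsideExample {
    public static void main(String[] args) throws Exception {
        // Assumes a memcached server listening on 192.168.77.105:12000 (configured later in this article)
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("192.168.77.105", 12000));

        String key = "user:1";
        // 1. Check memcached first
        Object value = client.get(key);
        if (value == null) {
            // 2. Cache miss: load from the database (hypothetical helper) ...
            value = loadUserFromDb(1);
            // 3. ... and cache it for 120 seconds so later reads are served from memory
            client.set(key, 120, value);
        }
        System.out.println(value);
        client.shutdown();
    }

    // Hypothetical stand-in for a real database query
    private static Object loadUserFromDb(int id) {
        return "user-" + id;
    }
}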

Using memcached:

Memcached is written in C and depends on recent versions of GCC and libevent.

Installation (download memcached and libevent); custom installation directory: /usr/local/memcached

# mkdir /usr/local/memcached

# cd /usr/local/memcached

Download with the following commands (network access required):

# wget http://www.danga.com/memcached/dist/memcached-1.2.0.tar.gz

# wget http://www.monkey.org/~provos/libevent-1.2.tar.gz

Use ls to check that the downloads completed; there should be two packages: libevent-1.2.tar.gz and memcached-1.2.0.tar.gz.

Install libevent first (verify that GCC is installed; install GCC first if it is not):

# tar zxvf libevent-1.2.tar.gz

# cd libevent-1.2

# ./configure --prefix=/usr

# make    (install GCC first if you see a message that GCC is not installed)

# make install

To test whether libevent was installed successfully, list the installed library files; you should see something like:

lrwxrwxrwx 1 root root     21 Nov 12 17:38 libevent-1.2.so.1 -> libevent-1.2.so.1.0.3

-rwxr-xr-x 1 root root 263546 Nov 12 17:38 libevent-1.2.so.1.0.3

-rw-r--r-- 1 root root 454156 Nov 12 17:38 libevent.a

-rwxr-xr-x 1 root root    811 Nov 12 17:38 libevent.la

lrwxrwxrwx 1 root root     21 Nov 12 17:38 libevent.so -> libevent-1.2.so.1.0.3

Installation Successful

Then install memcached; you need to specify the libevent installation location:

# cd /tmp

# tar zxvf memcached-1.2.0.tar.gz

# cd memcached-1.2.0

# ./configure --with-libevent=/usr

# make

# make install

If an error occurs along the way, double-check the error message and configure or add the appropriate library or path.

When the installation is complete, the memcached binary is placed at /usr/local/bin/memcached.

Test that memcached installed successfully:

# ls -al /usr/local/bin/mem*

-rwxr-xr-x 1 root root 137986 Nov 12 17:39 /usr/local/bin/memcached

-rwxr-xr-x 1 root root 140179 Nov 12 17:39 /usr/local/bin/memcached-debug

Start memcached service:

 1. Start the memcache server side:

# /usr/local/bin/memcached -d -m 8096 -u root -l 192.168.77.105 -p 12000 -c 256 -P /tmp/memcached.pid

The -d option runs memcached as a daemon;

-m is the amount of memory allocated to memcache, in megabytes (8096 MB here);

-u is the user that runs memcache (root here);

-l is the server IP address to listen on; if the server has multiple addresses, specify the one to use (192.168.77.105 here);

-p is the port memcache listens on (12000 here; preferably a port above 1024);

-c is the maximum number of concurrent connections (the default is 1024; 256 here, set it according to your server's load);

-P is the file in which memcache saves its PID (/tmp/memcached.pid here).

 

2. If you want to end the memcache process, execute:

# cat /tmp/memcached.pid    or    # ps aux | grep memcached    (find the corresponding process ID)

# kill <process ID>

You can also start multiple daemons, but the ports must not be duplicated.

Connecting to memcache:

telnet <ip> <port>

Note: before connecting, you need to add a firewall rule on the memcache server that opens the port memcached listens on (12000 in this example):

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 12000 -j ACCEPT

Reload the firewall rules:

service iptables restart

OK, you should now be able to connect to memcache.

Type stats at the client to view memcache status information:

 

pid                     Process ID of the memcached server
uptime                  Number of seconds the server has been running
time                    The server's current UNIX timestamp
version                 Memcache version
pointer_size            Pointer size of the current operating system (generally 32 bits on a 32-bit system)
rusage_user             Cumulative user time for the process
rusage_system           Cumulative system time for the process
curr_items              Number of items currently stored by the server
total_items             Total number of items stored since the server started
bytes                   Number of bytes currently used by the server to store items
curr_connections        Number of connections currently open
total_connections       Number of connections opened since the server started
connection_structures   Number of connection structures allocated by the server
cmd_get                 Total number of get (retrieval) requests
cmd_set                 Total number of set (storage) requests
get_hits                Total number of hits
get_misses              Total number of misses
evictions               Number of items removed to free memory (when the space allocated to memcache is full, old items are removed to make room for new items)
bytes_read              Total bytes read from the network (request bytes)
bytes_written           Total bytes sent to the network (result bytes)
limit_maxbytes          Amount of memory allocated to memcache (in bytes)
threads                 Current number of threads
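These counters can also be read programmatically. The following sketch, again assuming the spymemcached client library (an assumption, not part of the original tutorial), prints the stats map for every server the client is connected to.

import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.Map;
import net.spy.memcached.MemcachedClient;

public class MemcachedStatsExample {
    public static void main(String[] args) throws Exception {
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("192.168.77.105", 12000));
        // getStats() returns one stats map per connected server, keyed by server address
        Map<SocketAddress, Map<String, String>> stats = client.getStats();
        for (Map.Entry<SocketAddress, Map<String, String>> server : stats.entrySet()) {
            System.out.println("Server: " + server.getKey());
            // Fields include curr_items, get_hits, get_misses, limit_maxbytes, and so on
            for (Map.Entry<String, String> stat : server.getValue().entrySet()) {
                System.out.println("  " + stat.getKey() + " = " + stat.getValue());
            }
        }
        client.shutdown();
    }
}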

Redis
1. What is Redis

Redis is an open-source, high-performance key-value database developed in C. It adapts to the storage needs of different scenarios by providing multiple key-value data types. The key-value data types supported by Redis so far are listed below (a short code sketch of each follows the list):

String type

Hash type

List type

Set type

Sorted set type
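As a quick illustration of these types, here is a minimal sketch using the Jedis client that is introduced later in this article; the key names are purely illustrative.

import redis.clients.jedis.Jedis;

public class RedisDataTypesExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.101.3", 6379);

        // String
        jedis.set("str:key", "value");

        // Hash
        jedis.hset("user:1", "name", "Tom");

        // List
        jedis.lpush("task:queue", "task1", "task2");

        // Set
        jedis.sadd("online:users", "u1", "u2");

        // Sorted set (the score determines the ordering, e.g. for leaderboards)
        jedis.zadd("leaderboard", 100, "player1");

        jedis.close();
    }
}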

2. Application scenarios for Redis

Caching (data queries, short connections, news content, product content, and so on). (The most common use.)

Session separation in a distributed cluster architecture.

A chat room's online friends list.

Task queues (flash sales, panic buying, 12306 ticketing, and so on).

Application leaderboards.

Website access statistics.

Data expiration handling (accurate to the millisecond; see the sketch below).
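As a small sketch of the expiration point above (assuming the Jedis client used later in this article; key names are illustrative), keys can be given a time-to-live in seconds or in milliseconds:

import redis.clients.jedis.Jedis;

public class RedisExpireExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.101.3", 6379);
        jedis.set("session:abc", "user-data");

        // Expire after 30 seconds ...
        jedis.expire("session:abc", 30);

        // ... or with millisecond precision
        jedis.pexpire("session:abc", 1500L);

        // Remaining time-to-live in milliseconds (negative if the key has no TTL or does not exist)
        System.out.println(jedis.pttl("session:abc"));

        jedis.close();
    }
}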

3. Redis installation

  Redis is developed in C and is recommended to run on Linux; CentOS 6.4 is used here as the installation environment. Installing Redis requires downloading the source code and compiling it first, and compilation depends on the gcc environment.

  If there is no gcc environment, install it: yum install gcc-c++

  Download from the official site: http://download.redis.io/releases/redis-3.0.0.tar.gz

Copy redis-3.0.0.tar.gz to /usr/local.

Extract the source:

# tar -zxvf redis-3.0.0.tar.gz

Go to the extracted directory and compile:

# cd /usr/local/redis-3.0.0

# make

Install to the specified directory, for example /usr/local/redis:

# cd /usr/local/redis-3.0.0

# make PREFIX=/usr/local/redis install

redis.conf

redis.conf is the Redis configuration file; it is located in the Redis source directory.

Note: modify port to change the port the Redis process listens on; the default port is 6379.

Copy the configuration file to the installation directory:

Go to the source directory that contains redis.conf and copy it to the installation path:

# cd /usr/local/redis

# mkdir conf

# cp /usr/local/redis-3.0.0/redis.conf /usr/local/redis/bin

The bin folder under the installation directory contains the installed files.

redis-sentinel, new in Redis 3.0, is a Redis cluster management tool that provides high availability.

4. Starting Redis

4.1. Front-end mode startup

Running bin/redis-server directly starts Redis in front-end mode. The drawback of front-end mode is that the redis-server program ends as soon as the SSH command window is closed, so this method is not recommended.

4.2. Back-end mode startup

Modify the redis.conf configuration file and set daemonize yes to start in back-end mode.

Execute the following commands to start Redis:

cd /usr/local/redis

./bin/redis-server ./redis.conf

Redis uses port 6379 by default.

5. Connecting to standalone Redis with Jedis

5.1. Jar package

If the project is managed with Maven, add the dependency to pom.xml:

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.7.0</version>
</dependency>

If the project is not managed with Maven, add the jar packages manually (commons-pool2-2.3.jar and jedis-2.7.0.jar).

Connect to the Redis service by creating a single Jedis object, as in the following code.

Single-instance connection to Redis:

@Test
public void testJedisSingle() {
    // Connect directly to the Redis server
    Jedis jedis = new Jedis("192.168.101.3", 6379);
    jedis.set("name", "bar");
    String name = jedis.get("name");
    System.out.println(name);
    // Release the connection
    jedis.close();
}

Connect using connection pooling

Redis connections cannot be shared through single-instance connections; you can use a connection pool to share Redis connections and increase resource utilization. Use JedisPool to connect to the Redis service, as in the following code:

@Test
public void pool() {
    JedisPoolConfig config = new JedisPoolConfig();
    // Maximum number of connections
    config.setMaxTotal(30);
    // Maximum number of idle connections
    config.setMaxIdle(2);
    JedisPool pool = new JedisPool(config, "192.168.101.3", 6379);
    Jedis jedis = null;
    try {
        jedis = pool.getResource();
        jedis.set("name", "lisi");
        String name = jedis.get("name");
        System.out.println(name);
    } catch (Exception ex) {
        ex.printStackTrace();
    } finally {
        if (jedis != null) {
            // Close the connection (returns it to the pool)
            jedis.close();
        }
    }
}

