When can we use the Ehcache cache?


By Xiao Cheng Gu Shi Duo (a Jianshu author)
Original link: http://www.jianshu.com/p/2cd6ad416a5a

I. What is Ehcache?

Ehcache is one of the second-level (L2) cache technologies commonly used with Hibernate. It stores query results in memory or on disk so that subsequent queries do not have to hit the database again, greatly reducing database load.
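As a minimal, hedged sketch of basic usage with the Ehcache 2.x API (net.sf.ehcache); the cache name, size, and TTL values below are only illustrative:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class EhcacheQuickStart {
    public static void main(String[] args) {
        // Create a CacheManager (uses ehcache.xml on the classpath if present, otherwise defaults).
        CacheManager cacheManager = CacheManager.create();
        // name, maxElementsInMemory, overflowToDisk, eternal, timeToLiveSeconds, timeToIdleSeconds
        Cache userCache = new Cache("userCache", 1000, false, false, 300, 120);
        cacheManager.addCache(userCache);

        userCache.put(new Element("user:42", "Alice"));   // store a query result
        Element hit = userCache.get("user:42");
        if (hit != null) {
            System.out.println(hit.getObjectValue());     // served from memory, no database round trip
        }
        cacheManager.shutdown();
    }
}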

II. What are the typical use scenarios for Ehcache?

1. Page caching, the most important scenario. The data on a web page comes from many sources, usually from different objects and possibly from different databases, so caching the assembled page is a good idea.
2. Caching of common data. Configuration information, such as back-office settings that rarely change, can be cached (a cache-aside sketch follows this list).
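For the second scenario, a hedged cache-aside sketch for rarely changing configuration data; loadFromDatabase is a hypothetical placeholder for the real settings query:

import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

public class ConfigCache {
    private final Cache cache;   // e.g. a "configCache" with a long timeToLiveSeconds

    public ConfigCache(Cache cache) {
        this.cache = cache;
    }

    // Cache-aside read: serve from Ehcache, fall back to the database on a miss.
    public String getSetting(String key) {
        Element hit = cache.get(key);
        if (hit != null) {
            return (String) hit.getObjectValue();
        }
        String value = loadFromDatabase(key);   // hypothetical database lookup
        cache.put(new Element(key, value));
        return value;
    }

    private String loadFromDatabase(String key) {
        return "value-for-" + key;              // placeholder for the real settings query
    }
}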

III. Points to note when using Ehcache

1. The underlying tables are updated relatively rarely.
2. Concurrency requirements are not strict, because the caches on multiple application servers cannot be synchronized in real time.
3. Consistency requirements are not high. A local cache such as Ehcache has no good way to keep caches on different servers consistent with each other, so when strong consistency is required we use a centralized cache such as Redis or Memcached instead.

IV. How Ehcache behaves in a clustered or distributed environment

In a distributed deployment, Ehcache supports several synchronization modes:
1. RMI Multicast mode

Example:

<!-- multicastGroupAddress must be a valid multicast group address (224.0.0.0 - 239.255.255.255) -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                multicastGroupPort=4446, timeToLive=255"/>
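In a working RMI setup, the peer provider above only handles discovery; each node typically also declares a peer listener, and each replicated cache a replicator. A sketch (host name, port, and cache name are placeholders):

<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=localhost, port=40001, socketTimeoutMillis=2000"/>

<cache name="sampleCache" maxElementsInMemory="1000" eternal="false" timeToLiveSeconds="300">
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
        properties="replicatePuts=true, replicateUpdates=true, replicateUpdatesViaCopy=true,
                    replicateRemovals=true, replicateAsynchronously=true"/>
</cache>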

Principle: nodes discover each other through UDP multicast heartbeats sent to the configured multicast IP address and port; when a cache changes, the update is replicated to the discovered peers over RMI.
Drawbacks: Ehcache's multicast support is fairly basic and only implements the essentials. It works fine in simple setups (for example, two single-NIC servers on one hub synchronizing with each other), but it is prone to problems in more complex environments: many servers, servers with multiple addresses, and especially clusters where a single cluster address fronts several physical machines, each hosting multiple virtual sub-addresses.
2. Peer (manual discovery) mode
Principle: each Ehcache node must be explicitly configured to point to the other N-1 nodes.
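A hedged configuration sketch for one node using manual discovery, where rmiUrls lists the caches on the other peers (host names, port, and cache name are placeholders):

<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=manual,
                rmiUrls=//server2:40001/sampleCache|//server3:40001/sampleCache"/>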
3. JMS Message Mode
Principle: the core of this mode is a message queue. Each application node subscribes to a predefined topic, and whenever a node updates an element it also publishes the updated element to that topic. Every application server node obtains the latest data by listening to the MQ and then updates its own Ehcache cache. Ehcache supports ActiveMQ out of the box, and similar integrations with Kafka or RabbitMQ can be implemented with custom components. A listener sketch is shown below.
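As a rough, hedged illustration of this pattern (plain JMS with ActiveMQ rather than the Ehcache JMS replication module itself), each node could listen on the shared topic and refresh its local Ehcache; the broker URL, topic name, cache name, and "key=value" message format are assumptions:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import org.apache.activemq.ActiveMQConnectionFactory;

public class CacheUpdateListener {
    public static void main(String[] args) throws Exception {
        // Assumes a cache named "userCache" is defined in ehcache.xml (getCache returns null otherwise).
        Cache localCache = CacheManager.create().getCache("userCache");

        Connection connection = new ActiveMQConnectionFactory("tcp://mq-host:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("cache.updates");   // predefined topic all nodes subscribe to
        MessageConsumer consumer = session.createConsumer(topic);

        // Each node listens for updates published by the other nodes and refreshes its own Ehcache.
        consumer.setMessageListener(message -> {
            try {
                TextMessage text = (TextMessage) message;      // assumed format: "key=value"
                String[] kv = text.getText().split("=", 2);
                localCache.put(new Element(kv[0], kv[1]));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
    }
}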
4. Cache Server Mode
Principle: in this mode the cache runs as a standalone cache server, with master and slave nodes.

Drawback: the cache is prone to data-inconsistency issues.

V. What are the bottlenecks of using Ehcache?

1. Cache drift: each application node manages only its own cache, so updating one node does not affect the others, and the data may drift out of sync.
2. Database bottleneck: for a single-instance application the cache shields the database from read storms, but in a cluster every application node must keep its own data up to date, and the more nodes there are, the greater the load this maintenance places on the database.

VI. How Ehcache is used in practical work

In my actual work, I mostly use Ehcache together with Redis in a two-level cache setup: Redis as the centralized cache and Ehcache as the local second-level cache.
The first way:

Note:
In this approach, each application server uses an Ehcache timer to periodically synchronize its local cache from the Redis cache server. The drawback is that the timers on different servers fire at different moments, so each server picks up the latest cache at a different time and the data can be inconsistent. It is usable when consistency requirements are not high.
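A minimal sketch of this first approach, assuming Jedis as the Redis client; the Redis host, key, cache name, and refresh period are illustrative:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import redis.clients.jedis.Jedis;

public class TimedRedisRefresher {
    public static void main(String[] args) {
        // Assumes a cache named "configCache" is defined in ehcache.xml.
        Cache localCache = CacheManager.create().getCache("configCache");
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

        // Periodically pull the latest value from the centralized Redis cache into the local Ehcache.
        // Each server runs its own timer, so the refresh moments differ from node to node.
        timer.scheduleAtFixedRate(() -> {
            try (Jedis jedis = new Jedis("redis-host", 6379)) {
                String latest = jedis.get("app:config");
                if (latest != null) {
                    localCache.put(new Element("app:config", latest));
                }
            }
        }, 0, 60, TimeUnit.SECONDS);
    }
}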
The second way:

Note:
By introducing an MQ queue, the Ehcache on each application server listens for MQ messages and synchronizes itself, so quasi-real-time updates can be achieved to some extent, whether the messages are pushed or pulled. However, because network speeds differ between servers, full strong consistency still cannot be achieved. The same applies to solutions based on a distributed coordination and notification component such as ZooKeeper, which rest on the same principle.
Summary:
1. The advantage of a two-level cache is that it reduces the network overhead of fetching cached data, and when the centralized cache fails, a local cache such as Ehcache can still keep the application running normally, which increases the robustness of the program. In addition, a two-level cache policy can prevent cache-penetration problems to some extent (a read-path sketch follows this summary).
2. According to the CAP theorem, if you need a strongly consistent cache (decide this based on your business), a centralized cache such as Redis or Memcached is the best choice.
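A hedged sketch of the two-level read path mentioned in point 1: check the local Ehcache first, then Redis, then the database. Jedis is assumed as the Redis client and loadFromDatabase is a hypothetical placeholder:

import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;
import redis.clients.jedis.Jedis;

public class TwoLevelCache {
    private final Cache localCache;   // Ehcache, level 1 (local)
    private final Jedis redis;        // Redis, level 2 (centralized)

    public TwoLevelCache(Cache localCache, Jedis redis) {
        this.localCache = localCache;
        this.redis = redis;
    }

    public String get(String key) {
        // 1. Local Ehcache: no network hop at all.
        Element hit = localCache.get(key);
        if (hit != null) {
            return (String) hit.getObjectValue();
        }
        // 2. Centralized Redis: if it is down, fall through to the database,
        //    so the application keeps working (the robustness point above).
        String value = null;
        try {
            value = redis.get(key);
        } catch (RuntimeException redisDown) {
            // ignore and fall back to the database
        }
        // 3. Database as the source of truth.
        if (value == null) {
            value = loadFromDatabase(key);
        }
        localCache.put(new Element(key, value));
        return value;
    }

    private String loadFromDatabase(String key) {
        return "value-for-" + key;    // hypothetical placeholder for the real query
    }
}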
