Caching is now widely used in website architecture to improve the performance of Web applications. The following cache architectures are common across different websites:
(1) Single-host cache
(2) Simple distributed cache
(3) Cache cluster using replication
(4) Cache cluster using hashing
(5) High-performance, high-reliability cache cluster
Below is a detailed analysis of the applicability, advantages, and disadvantages of each of these cache strategies. Of course, there is no caching dogma; these are only reference cases, and each application must be analyzed on its own terms.
(1) Single-host cache
A single-host cache means the Web application and the cache run in the same process; it is the simplest cache strategy. Even a static HashMap or List used in a program falls into this category. Typical single-host caches include OSCache and Ehcache.
OSCache is a widely used, high-performance J2EE caching framework that can serve as a general-purpose caching solution for any Java application. Its main features: any Java object can be cached, and portions of JSP pages or HTTP responses can be cached without restriction.
Ehcache originated in Hibernate and is a pure-Java in-process cache. Its main features: it is fast and simple, it acts as the pluggable cache for Hibernate 2.1, it has minimal dependencies, and it comes with comprehensive documentation and tests.
Using a single-host cache in a Web application can greatly improve system throughput. In a telecom Web application, I used OSCache to cache, at the page level, the home page, the menus, and all data reachable directly on a page (that is, everything not fetched through the "more" link), refreshing it on a schedule. System performance improved dramatically, database traffic dropped sharply, and the home page could sustain 1000 concurrent requests.
Of all cache strategies, the single-host cache has the fastest read/write access and the lowest cost, making it a good fit when the data volume is small and the concurrency requirements are modest. Its problems: the amount of data that can be cached is limited; it is deployed on the same server as the application and competes with it for system resources; and it cannot be scaled out. Moreover, as Web traffic grows and the application must be deployed as a cluster, every node has to hold its own full copy of the cached data, since the nodes cannot share a cache.
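The static-HashMap style of single-host cache described above can be sketched in a few lines of plain Java. This is an illustrative stand-in, not the OSCache or Ehcache API; the class and method names are invented for the example. It adds a time-to-live so entries expire and get refreshed, mirroring the "updated on a regular basis" behavior described for the telecom home page:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a single-host (in-process) cache with per-entry TTL.
// Names are illustrative, not from OSCache or Ehcache.
public class LocalCache {
    private static class Entry {
        final Object value;
        final long expiresAt;  // absolute expiry time in ms
        Entry(Object value, long ttlMs) {
            this.value = value;
            this.expiresAt = System.currentTimeMillis() + ttlMs;
        }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    // Store a value with a time-to-live.
    public void put(String key, Object value, long ttlMs) {
        store.put(key, new Entry(value, ttlMs));
    }

    // Return the cached value, or null if absent or expired.
    public Object get(String key) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt) {
            store.remove(key);  // lazily evict expired entries
            return null;
        }
        return e.value;
    }
}
```

Because the cache lives in the application's own heap, reads are just a map lookup, which is why this strategy is the fastest of all, and also why its capacity and scalability are bounded by the single host.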
(2) Simple distributed cache
Simple distributed caching has two representative deployment modes.
1. Single-instance memcached deployment
No discussion of caching can ignore memcached. You can deploy one memcached server as a central cache server; multiple applications then access the cached data as clients of that server, avoiding the drawback of the single-host solution, where the same data has to be cached repeatedly on every application server.
2. OSCache and Ehcache distributed caching
OSCache and Ehcache can broadcast cached data over JGroups to automatically synchronize the caches of multiple applications: after one application updates its cache, the change is broadcast to the caches of the other applications, so they do not need to go back to the database to reload the data.
Both simple distributed strategies improve considerably on the single-host cache, especially the memcached variant: because memcached itself is very fast and the cache server is separated from the application, system throughput rises sharply when the Web application is deployed as a cluster. The distributed cache built on OSCache and Ehcache, by contrast, is still essentially the single-host model; it optimizes how cached data is kept in sync but does not change that model's basic limits.
(3) Cache cluster using replication
A replication-based cache cluster is appropriate when the following conditions hold:
1. The amount of data to cache is not very large and fits on a single machine.
2. High read performance is required.
3. The cached data changes infrequently.
For Web applications that meet these three requirements, a replication-based cache cluster can improve system performance. Typically, multiple cache instances are grouped into a cluster behind a virtual IP address, so the cluster is transparent to client applications and the failure of one cache server does not affect them. When an application updates the cache, the cache instance it touched notifies the other servers in the cluster, and the cluster automatically synchronizes the cached data across all of them.
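The update-propagation step above can be sketched with in-process objects standing in for the networked cache servers (a real product would broadcast over the network, e.g. via JGroups; the class and method names here are invented for the example):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a replicated cache cluster: every put on one node is pushed to
// all peers, so a read can be served by any node in the cluster.
public class ReplicatedCache {
    private final Map<String, Object> local = new ConcurrentHashMap<>();
    private final List<ReplicatedCache> peers = new ArrayList<>();

    public void addPeer(ReplicatedCache peer) {
        peers.add(peer);
    }

    // Update locally, then notify every peer so the cluster stays in sync.
    public void put(String key, Object value) {
        local.put(key, value);
        for (ReplicatedCache peer : peers) {
            peer.local.put(key, value);  // stand-in for a network broadcast
        }
    }

    public Object get(String key) {
        return local.get(key);
    }
}
```

The sketch also makes the trade-offs visible: every node stores the full data set (hence the "fits on a single machine" condition), and every write costs one operation per peer (hence the "changes infrequently" condition), while reads scale with the number of nodes.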
(4) Cache cluster using hashing
A hash-based cache cluster is appropriate when the following conditions hold:
1. The amount of data to cache is extremely large.
2. High read performance is required.
3. A single point of failure (SPOF) is acceptable.
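The idea behind a hash-based cluster is that each key is mapped to exactly one cache node, so total cache capacity grows with the number of nodes; if a node dies, only its share of the keys is lost, which is the tolerated single point of failure above. A minimal sketch of the node-selection step, using simple modulo hashing (the class is illustrative; consistent hashing would be the usual refinement to limit key movement when nodes are added or removed):

```java
// Sketch of hash-based partitioning: each key maps to exactly one node.
public class HashCluster {
    private final String[] nodes;

    public HashCluster(String[] nodes) {
        this.nodes = nodes;
    }

    // Pick a node by modulo hashing. The & 0x7fffffff masks off the sign
    // bit, since String.hashCode() may be negative.
    public String nodeFor(String key) {
        return nodes[(key.hashCode() & 0x7fffffff) % nodes.length];
    }
}
```

Because the mapping is deterministic, every application server computes the same node for a given key without any coordination, which is what lets this scheme scale to extremely large data sets.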