Distributed Caching in Large-Scale Web Applications

Source: Internet
Author: User
Tags: memcached

Previous articles took a hands-on angle, covering the application of memcached, disaster tolerance, monitoring, and so on, but lacked theoretical explanation and analysis of the underlying principles. This article approaches the subject from theory, giving a macroscopic understanding of distributed caching, NoSQL, and related technologies as a basis for further study and use. In building large-scale web applications, caching is practically indispensable, so the need to learn it is self-evident.

1 Overview of Distributed Caching

1.1 Characteristics of Distributed Caching
Distributed caching has the following characteristics:
1) High performance: when traditional databases face large-scale data access, disk I/O often becomes the performance bottleneck, leading to excessive response latency. A distributed cache uses high-speed memory as the storage medium for data objects, stores data in key/value form, and can ideally deliver DRAM-level read/write performance;
2) Dynamic extensibility: supports elastic scaling, adapting to changes in data-access load by dynamically adding or removing nodes, while providing predictable performance and scalability and maximizing resource utilization;
3) High availability: availability covers both data availability and service availability. High availability rests on redundancy mechanisms with no single point of failure; faults are discovered automatically and failover is transparent, so a server failure causes neither caching-service interruption nor data loss. During dynamic scaling, data partitions are rebalanced automatically while the caching service remains continuously available;
4) Ease of use: provides a single view of data and of management; the API is simple and independent of topology; dynamic scaling and failure recovery need no manual configuration, and backup nodes are chosen automatically; most caching systems also offer a graphical management console for unified maintenance;
5) Distributed code execution: task code is shipped to the data nodes and executed in parallel, and the client aggregates the returned results, effectively avoiding movement and transmission of cached data. The latest Java data grid specification, JSR-347, adds distributed code execution and Map/Reduce API support; mainstream distributed cache products such as IBM WebSphere eXtreme Scale, VMware GemFire, GigaSpaces XAP, and Red Hat Infinispan support this new programming model.
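The dynamic extensibility in point 2 hinges on how keys are mapped to nodes. The article does not name a particular algorithm, but consistent hashing is one common way to achieve elastic scaling with minimal disruption; the sketch below (hypothetical node names, MD5 used purely for illustration) shows that adding a node remaps only a fraction of the keys rather than reshuffling the whole key space:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps cache keys to nodes; adding or removing a node only remaps
    the keys on that node's arc of the ring, not the whole key space."""

    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas          # virtual nodes per physical node
        self._ring = []                   # sorted hash positions
        self._node_at = {}                # hash position -> node name
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}#{i}")
            bisect.insort(self._ring, h)
            self._node_at[h] = node

    def remove_node(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}#{i}")
            self._ring.remove(h)
            del self._node_at[h]

    def node_for(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self._ring, h) % len(self._ring)
        return self._node_at[self._ring[idx]]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
keys = [f"user:{i}" for i in range(1000)]
before = {k: ring.node_for(k) for k in keys}
ring.add_node("cache-d")                  # elastic expansion
after = {k: ring.node_for(k) for k in keys}
moved = sum(1 for k in keys if before[k] != after[k])
# only a minority of keys move to the new node; the rest stay put
```

A plain modulo hash (`hash(key) % node_count`) would instead remap almost every key on any topology change, which is why elastic cache platforms avoid it.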
1.2 Typical Application Scenarios
Typical scenarios for distributed caching can be grouped into the following categories:
1) Page caching: caches content fragments of web pages, including HTML, CSS, and images; widely used on social networking sites;
2) Application object caching: the caching system serves as a second-level cache for an ORM framework, reducing load on the database and accelerating application access;
3) State caching: covers session state and the state data produced when applications scale horizontally; such data is generally hard to recover and demands high availability, and is common in highly available clusters;
4) Parallel processing: usually involves large numbers of intermediate computation results that need to be shared;
5) Event handling: distributed caches provide continuous query processing over event streams to meet real-time requirements;
6) Extreme transaction processing: distributed caches provide a high-throughput, low-latency solution for transactional applications, supporting highly concurrent transaction requests; used in railways, financial services, and telecommunications.
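Scenario 2 is typically implemented with the cache-aside pattern: check the cache first, and only fall through to the database on a miss. The minimal sketch below uses an in-memory dict as a stand-in for a real cache client (such as memcached) and a stub `load_user` in place of a real ORM query; both names are illustrative:

```python
import time

class CacheAside:
    """Cache-aside reads: check the cache first; on a miss, load from
    the backing store and populate the cache with a TTL."""

    def __init__(self, load_from_db, ttl_seconds=300):
        self._load = load_from_db
        self._ttl = ttl_seconds
        self._store = {}                  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]               # cache hit: no database access
        value = self._load(key)           # cache miss: query the database
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value

    def invalidate(self, key):
        self._store.pop(key, None)        # call after updating the DB row

db_calls = []
def load_user(user_id):
    db_calls.append(user_id)              # stands in for a real SQL query
    return {"id": user_id, "name": f"user-{user_id}"}

cache = CacheAside(load_user)
cache.get("42")                           # miss: one DB call
cache.get("42")                           # hit: served from memory
# len(db_calls) == 1
```

This is exactly how a second-level ORM cache cuts database load: repeated reads of hot objects never reach the database until the entry expires or is invalidated.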
1.3 Development of distributed caching
Distributed caching has gone through several stages of development, from the initial local cache, through the distributed caching system and the resilient caching platform, to the flexible application platform, with the goal of building ever better distributed systems (shown in the illustration below).

1) Local cache: data is stored in the memory space of the application code itself. The advantage is fast data access; the disadvantages are that the data cannot be distributed and no fault tolerance is possible. A typical example is cache4j;


2) Distributed caching system: data is distributed across a fixed number of cluster nodes. The advantage is that cache capacity can be extended (static scaling); the disadvantages are that scaling requires extensive reconfiguration and there is no fault-tolerance mechanism. A typical example is memcached;


3) Resilient caching platform: data is distributed across cluster nodes, with high availability based on redundancy mechanisms. The advantage is dynamic scaling with fault tolerance; the disadvantage is that replicating backups has some impact on system performance. A typical example is Windows AppFabric Caching;


4) Flexible application platform: the flexible application platform represents the future direction of distributed caching systems in cloud environments. Simply put, it combines resilient caching with code execution: shipping business-logic code to the nodes where the data resides greatly reduces data-transfer overhead and improves system performance. A typical example is GigaSpaces XAP.
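The code-shipping idea behind stage 4 can be sketched in a few lines: the task function travels to each data partition and runs where the data lives, and only the small partial results travel back for client-side aggregation. Here threads stand in for remote nodes, which is a deliberate simplification of a real data grid:

```python
from concurrent.futures import ThreadPoolExecutor

# Three "nodes", each holding one partition of the cached data.
partitions = [
    {"a": 3, "b": 5},
    {"c": 2, "d": 7},
    {"e": 1},
]

def task(local_data):
    """Runs where the data lives: only a small result travels back,
    not the cached data itself."""
    return sum(local_data.values())

with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
    partials = list(pool.map(task, partitions))   # "ship" the code to each node

total = sum(partials)                             # client-side aggregation
# total == 18
```

In a real grid (GigaSpaces XAP, Infinispan, and so on) the serialized task is sent over the network to each member, but the data-locality principle is the same.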

1.4 Distributed Caching and NoSQL
NoSQL, also known as Not Only SQL, mainly refers to non-relational, distributed, horizontally scalable database design patterns. NoSQL abandons the strict transactional consistency and normal-form constraints of traditional relational databases in favor of weak consistency models. Compared with NoSQL systems, traditional databases struggle to meet the storage demands of application data in cloud environments, in the following three respects:
1) According to the CAP theorem, at most two of consistency, availability, and partition tolerance can be satisfied at the same time. For the large number of web applications deployed on cloud platforms, availability and partition tolerance usually take priority, so the consistency constraint is typically relaxed appropriately. The transactional consistency requirements of traditional databases restrict their horizontal scaling and high-availability techniques;
2) Traditional databases are ill-suited to new data storage and access patterns. Web 2.0 sites and cloud platforms hold large amounts of semi-structured data, such as user session data, time-sensitive transactional data, and computation-intensive task data, which is better stored in key/value form and does not need the complex query and management features an RDBMS provides;
3) NoSQL provides low-latency reads and writes and supports horizontal scaling, which is critical for cloud platforms facing massive volumes of access requests. Traditional relational databases cannot deliver the same performance, and in-memory databases are limited in capacity and lack scalability. Distributed caching, as an important form of NoSQL, provides highly available state storage and scalable application-acceleration services for cloud platforms; there is no clear boundary between it and other NoSQL systems. Because application access patterns and system failures on a cloud platform are unpredictable, application software is generally designed to be stateless: large amounts of state information are no longer managed by a component, container, or platform, but handed directly to a back-end distributed caching service or NoSQL system.
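The stateless design mentioned above can be sketched as follows: session state lives in an external store keyed by session ID, so any application server can handle any request without sticky routing. The `SessionStore` below is a dict-backed stand-in for a real distributed cache client, and all names are illustrative:

```python
import json
import uuid

class SessionStore:
    """Stand-in for a distributed cache: any stateless app server can
    look a session up by ID, so requests need no server affinity."""

    def __init__(self):
        self._kv = {}                     # replace with a memcached/Redis client

    def create(self, data):
        sid = uuid.uuid4().hex
        self._kv[sid] = json.dumps(data)  # serialize, as a real cache would
        return sid

    def load(self, sid):
        raw = self._kv.get(sid)
        return json.loads(raw) if raw is not None else None

store = SessionStore()

def handle_request(server_name, sid):
    """Any server instance can serve the request given only the session ID."""
    session = store.load(sid)
    return f"{server_name} greets {session['user']}"

sid = store.create({"user": "alice"})
# two different "servers" serve the same session without sticky routing
r1 = handle_request("web-1", sid)
r2 = handle_request("web-2", sid)
```

Because the servers themselves hold no state, they can fail or be added freely; losing a server loses no sessions, which is precisely the availability property the text describes.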
1.5 Distributed Caching and Extreme Transaction Processing

With the further development of cloud computing and Web 2.0, many enterprises and organizations face unprecedented requirements: millions of concurrent users, thousands of concurrent transactions per second, flexible elasticity and scalability, low latency, and 24/7/365 availability. Where traditional transactional applications handle concurrency at a limited scale, a class of extreme transaction processing applications has emerged; a typical example is a railway ticketing system. In Wikipedia's view, extreme transactions are workloads exceeding 500 transactions per second or 10,000 concurrent accesses. Gartner defines extreme transaction processing (XTP) as an application style for developing, deploying, managing, and maintaining transactional applications, characterized by extreme requirements on performance, scalability, availability, manageability, and so on. Gartner predicted in its report that the share of extreme transaction processing applications would grow from 10% in 2005 to 20% in 2010, and that XTP would be a hot technology for the following five to ten years. Extreme transaction processing undoubtedly brings new challenges to the traditional three-tier web architecture: how to support high-volume, business-critical transactional applications on cheap, standardized hardware and software platforms. As a key XTP technology, distributed caching can provide a high-throughput, low-latency solution for transactional applications. Its write-behind mechanism offers shorter response times while significantly reducing transaction load on the database, and a staged event-driven architecture (SEDA) can support large-scale, highly concurrent transaction-processing requests.
In addition, distributed caches manage transactions in memory and provide data-consistency guarantees, using data replication to achieve high availability, offering a better combination of scalability and performance.
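A write-behind cache along these lines can be sketched as follows. This is a dict-backed stand-in, not a real product's API: `db_writes` simulates the database and `batch_size` is an illustrative parameter. Writes complete against memory immediately and reach the database later in batches, which is where the reduced transaction load comes from:

```python
from collections import OrderedDict

class WriteBehindCache:
    """Writes complete against memory immediately; dirty entries are
    flushed to the database later in batches, cutting DB transaction load."""

    def __init__(self, flush_to_db, batch_size=3):
        self._flush_to_db = flush_to_db
        self._batch_size = batch_size
        self._data = {}
        self._dirty = OrderedDict()       # keys awaiting a database write

    def put(self, key, value):
        self._data[key] = value           # fast in-memory write
        self._dirty[key] = None
        if len(self._dirty) >= self._batch_size:
            self.flush()

    def get(self, key):
        return self._data.get(key)

    def flush(self):
        if self._dirty:
            batch = {k: self._data[k] for k in self._dirty}
            self._flush_to_db(batch)      # one batched DB round-trip
            self._dirty.clear()

db_writes = []
cache = WriteBehindCache(db_writes.append, batch_size=3)
for i in range(5):
    cache.put(f"k{i}", i)
cache.flush()                             # drain the remainder on shutdown
# db_writes == [{'k0': 0, 'k1': 1, 'k2': 2}, {'k3': 3, 'k4': 4}]
```

The trade-off is the one the text implies: between a `put` and its flush, the database lags the cache, so a node crash can lose buffered writes unless the cache replicates them, which is why write-behind is usually paired with the replication-based high availability described above.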



