Memcached Cache Basic Concepts
Memcached is a distributed memory object caching system.
It is used in dynamic web applications to cache database data, reducing the load on the database and improving the overall performance of the site.
In enterprise scenarios, memcached is typically deployed as a caching service in front of a database. Because it serves data from a pre-allocated block of memory rather than from disk, it is far faster than having the database go to disk for every query, and therefore delivers better performance than reading the database directly.
In addition, memcached is often used to store session data shared among the application server nodes of a cluster architecture.
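As a small illustration of the pre-allocated memory model described above, a memcached daemon is normally started with its memory ceiling fixed up front. The flags below are standard memcached options; the values are only examples:

```
# Start a memcached daemon: 64 MB of cache memory, default port 11211,
# running as the "memcached" user, up to 1024 concurrent connections.
memcached -d -m 64 -p 11211 -u memcached -c 1024
```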
1. The workflow of the memcached service in an application
Memcached is an in-memory cache that is most often used to cache data from a database. The cached data is held in memory that memcached pre-allocates and manages, and applications read and write it through an API. The data cached in a memcached service behaves like one huge hash table, with every piece of data stored as a key-value pair.
Memcached caches the data that is read from the database most often. When the program needs data from the backend database, it checks the memcached memory cache first: if the data is in the cache, it is returned directly to the front-end application; if it is not, the request falls through to the backend database server. When the cache held no matching data, the program not only returns the result to the user but also writes that data into the memory cache so the next access can be served from memory. This greatly reduces the pressure on the database, speeds up the response of the whole site architecture, and improves the user experience.
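A minimal sketch of this cache-first read flow in Python, assuming the pymemcache client library and a hypothetical query_database() helper standing in for the real backend query:

```python
import json
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))

def query_database(user_id):
    # Hypothetical placeholder for the real database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)                       # 1. check the memcached cache first
    if cached is not None:
        return json.loads(cached)                 # cache hit: return directly
    data = query_database(user_id)                # 2. cache miss: query the backend database
    cache.set(key, json.dumps(data), expire=300)  # 3. write it back for the next request
    return data
```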
2. The memcached service in large-scale site architectures
For almost every site, as traffic grows, the first bottleneck in the cluster architecture appears at the servers in the database and storage roles. In practice we therefore try to push the user's request forward, that is, to return the requested data from a point as close to the user as possible, so that as few requests as possible reach the database.
See the author's article on large-scale, high-performance, high-concurrency web site architecture handling tens of millions of PV/IP: http://oldboy.blog.51cto.com/2561410/736710
3. Memcached load balancing and distributed application scenarios
[Distributed Applications 1]
Memcached supports distributed deployment; the program on the application server can be modified so that it supports this well.
[Distributed Applications 2]
On the application server, the program accesses the memcached service using a URL hash or consistent hashing algorithm, and the address pool of all memcached servers can simply be configured in each program's configuration file (see the consistent-hash sketch after this list).
[Distributed Applications 3]
Large portals such as Baidu route requests to the backend cache service through a middleware proxy layer.
[Distributed Applications 4]
Common load balancers such as LVS and HAProxy can also be used for cache load balancing. Compared with ordinary web application services, the key point here is the scheduling algorithm: for a cache layer, URL hash or consistent hashing is usually chosen.
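The following is a minimal, self-contained sketch of the consistent hashing idea mentioned above: the server address pool is hashed onto a ring (with virtual nodes), and each key is routed to the first server found clockwise from its own hash. The pool addresses and key names are purely illustrative:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent hash ring: maps cache keys to servers so that
    adding or removing a server only remaps a small fraction of keys."""

    def __init__(self, servers, replicas=100):
        self.replicas = replicas  # virtual nodes per server
        self.ring = []            # sorted hash positions on the ring
        self.nodes = {}           # hash position -> server address
        for server in servers:
            self.add(server)

    def _hash(self, value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add(self, server):
        for i in range(self.replicas):
            pos = self._hash(f"{server}#{i}")
            bisect.insort(self.ring, pos)
            self.nodes[pos] = server

    def get_server(self, key):
        pos = self._hash(key)
        idx = bisect.bisect(self.ring, pos) % len(self.ring)
        return self.nodes[self.ring[idx]]

# Hypothetical address pool, e.g. read from the application's configuration file.
pool = ["10.0.0.11:11211", "10.0.0.12:11211", "10.0.0.13:11211"]
ring = ConsistentHashRing(pool)
print(ring.get_server("user:1001:profile"))  # the same key always maps to the same server
```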
4. Characteristics of memcached
Memcached is a high-concurrency, high-performance caching service with the following characteristics:
(1) Simple protocol
The memcached protocol is relatively simple: it is a line-based text protocol, so you can connect with telnet and operate on the memcached service's cached data directly.
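For example, assuming a memcached instance on the default port 11211, a telnet session might look like the following. The set line carries the key, flags, expiry time in seconds, and the value length in bytes; the value itself is sent on the next line, and get returns the value between a VALUE header and END:

```
$ telnet 127.0.0.1 11211
set mykey 0 60 5
hello
STORED
get mykey
VALUE mykey 0 5
hello
END
quit
```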
(2) Event handling based on Libevent
Put simply, libevent is a library written in C that wraps event-handling mechanisms such as BSD's kqueue and Linux's epoll behind a single interface, which allows memcached to keep good performance even as the number of server-side connections grows.
(3) Built-in memory management method
Memcached manages its own memory, and does so very efficiently. All data is stored in memcached's built-in, pre-allocated memory; when that memory space is full, memcached uses the LRU algorithm to automatically evict cache data that has not been used recently, that is, it reuses the memory space occupied by outdated data.
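As a small sketch (again assuming the pymemcache library), the application can also attach an expiry time to each item; between explicit expiry and the automatic LRU eviction described above, the application never has to clean the cache by hand:

```python
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))

# The item expires after 60 seconds; if memory fills up sooner, memcached's
# LRU may evict it even earlier, so cached data must always be reloadable.
cache.set("hot:item", b"value", expire=60)
print(cache.get("hot:item"))  # b'value' while the item is still cached
```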
(4) Distributed behavior without communication between memcached servers
Memcached servers do not communicate with one another; each stores and serves its data independently and shares no information with the others. Distribution is achieved entirely through the design of the client, which is what lets memcached support massive caches and large-scale applications.
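A minimal sketch of such client-side distribution, assuming the pymemcache library: HashClient hashes every key to one server in the pool, so independent memcached instances that never talk to each other still behave as a single large cache. The addresses are illustrative:

```python
from pymemcache.client.hash import HashClient

# Hypothetical server pool; each instance knows nothing about the others.
client = HashClient([("10.0.0.11", 11211), ("10.0.0.12", 11211)])

client.set("session:abc123", "user-1001")  # routed to one node by the key hash
print(client.get("session:abc123"))        # the same key is read from the same node
```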