Memcached Enterprise Interview Case Explanation
1. What is memcached, and what is its role?
A. Memcached is open-source, high-performance in-memory caching software; the name combines "mem" (memory) and "cached" (caching).
B. Role: memcached pre-allocates a planned amount of memory and temporarily caches database data in it, reducing direct high-concurrency access from the business to the database. This improves database access performance and speeds up the dynamic application services of the website cluster.
2. Application scenarios of the memcached service in enterprise cluster architectures
(1). As a front-end cache for the database
A. Full cache (easy), static cache
For example: product categories and product information can be placed in memory in advance and then served from there; this is called preheating (cache warming). Users then read only the memcached cache and never hit the database.
B. Hotspot cache (difficult)
Requires cooperation from the front-end program. Only hotspot data, i.e. frequently accessed data, is cached.
First preheat the base data, then update it dynamically: read the cache first; if the cache has no matching data, read the database, then put the data just read into the cache.
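The read-cache-first flow above can be sketched as a small Python example. The dict-based cache and database here are hypothetical stand-ins for memcached and a real database, used only to illustrate the pattern:

```python
# Cache-aside (read-through) pattern: read the cache first; on a miss,
# read the database and put the result into the cache for next time.

database = {"sku-1001": "mechanical keyboard", "sku-1002": "usb hub"}  # stand-in DB
cache = {}  # stand-in for memcached

def get_product(key):
    value = cache.get(key)          # 1. try the cache first
    if value is not None:
        return value, "hit"
    value = database.get(key)       # 2. cache miss: read the database
    if value is not None:
        cache[key] = value          # 3. populate the cache for future requests
    return value, "miss"

print(get_product("sku-1001"))  # first access: miss, fetched from the DB
print(get_product("sku-1001"))  # second access: served from the cache
```

The first call misses and fills the cache; every later call for the same key is served from memory without touching the database.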
Special instructions:
1. For high-concurrency businesses such as e-commerce, the cache must be preheated (or another approach used). For example, in a flash sale, users first obtain a purchase qualification rather than instantly buying the goods themselves.
2. When the data is updated, a cache update is triggered so that users never see expired data.
C. A persistent cache storage system, such as Redis, can replace part of the database's storage for simple data businesses: voting, statistics, friend follows, product categories, and so on.
(2). As shared session storage for the cluster.
http://oldboy.blog.51cto.com/2561410/1331316
http://oldboy.blog.51cto.com/2561410/1323468
3. Workflow of the memcached service in different enterprise business scenarios
A. When the web program needs data from the backend database, it first queries the memcached memory cache. If the data is in the cache (a hit), it is returned directly to the front-end service and the user. If not (a miss), the program requests the backend database server; after obtaining the data, it not only returns it to the front-end service and the user but also puts it into memcached for the next request. Memcached thus always acts as a shield in front of the database, greatly reducing database access pressure, improving the response speed of the whole site architecture, and improving the user experience.
B. When the program updates, modifies, or deletes data already in the database, it also sends a request notifying memcached to invalidate the cached old data for the same key, keeping the data in memcached consistent with the database.
Under high concurrency, in addition to notifying memcached to invalidate stale cache entries, a mechanism can push the new data into the memcached cache before users access it. This further reduces database access pressure and increases the memcached hit rate.
C. Some database plugins can, when data is written to the database, automatically push it into the memcached cache; the database itself does not cache.
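The two invalidation strategies described above (simple invalidation vs. pushing fresh data under high concurrency) can be sketched as follows; the dicts are hypothetical stand-ins for the database and memcached:

```python
# On update: write the database first, then invalidate (or refresh) the
# cached entry so readers never see stale data for that key.

database = {"item:1": "old price"}
cache = {"item:1": "old price"}      # the old value is already cached

def update_item(key, new_value, push=False):
    database[key] = new_value        # 1. update the source of truth
    if push:
        cache[key] = new_value       # 2a. high-concurrency variant: push fresh data
    else:
        cache.pop(key, None)         # 2b. simple variant: invalidate; next read repopulates

update_item("item:1", "new price")
print(cache.get("item:1"))           # None: the stale entry was invalidated
update_item("item:1", "newer price", push=True)
print(cache.get("item:1"))           # fresh value pushed before any user read
```

The push variant trades a little extra write work for a higher hit rate, since the first reader after an update no longer has to go to the database.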
4. How is a distributed memcached cluster implemented?
Special note: a memcached cluster differs from a web service cluster: the sum of all memcached nodes' data equals the database's data, and each memcached node holds only part of it.
A. Client-side implementation
The program loads the full list of memcached server IPs and selects a node by hashing the key (consistent hashing).
B. Load balancer
Selects a node by hashing the key (consistent hashing).
Description: the goal of the consistent hashing algorithm is not only to map each requested object to exactly one server, but also to minimize the amount of cached data that must be redistributed when a cache server node goes down.
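A minimal consistent-hash ring illustrating this property is sketched below (the server addresses and virtual-node count are illustrative assumptions, not values from the original text):

```python
# Minimal consistent-hash ring: each server is hashed onto a ring (with
# virtual nodes for balance); a key is served by the first server clockwise
# from the key's hash. Removing one server only remaps that server's keys.
import bisect
import hashlib

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, servers, vnodes=100):
        self.ring = sorted((h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def server_for(self, key):
        i = bisect.bisect(self.points, h(key)) % len(self.ring)
        return self.ring[i][1]

servers = ["10.0.0.7:11211", "10.0.0.8:11211", "10.0.0.9:11211"]
ring = Ring(servers)
keys = [f"key{i}" for i in range(1000)]
before = {k: ring.server_for(k) for k in keys}

ring2 = Ring(servers[:-1])                      # one node goes down
moved = sum(before[k] != ring2.server_for(k) for k in keys)
print(f"{moved}/1000 keys remapped")            # roughly one third, not all 1000
```

With naive modulo hashing, losing one of three nodes would remap almost every key; with the ring, only the keys that pointed at the dead node move.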
==========================================================
5. What are the characteristics and working principles of the memcached service?
A. All data is stored in memory; no persistent storage is designed, so restarting the service loses the data.
B. Nodes are independent of each other.
C. Asynchronous I/O model, using the event notification mechanism of libevent.
D. Data exists in key/value form.
E. Client/server (C/S) architecture, written in C, with a total of just over 2,000 lines of code.
F. When the cached data reaches the memory limit set at startup, the LRU algorithm automatically deletes stale cached data.
G. An expiration time can be set on stored data, so expired data is automatically cleared. The service itself does not actively monitor expiration; instead it checks the key's timestamp at access time to see whether it has expired (lazy expiration).
H. Memcached's memory allocation mechanism carves memory into blocks of specific sizes and then groups blocks of the same size together.
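Point G, lazy expiration, can be sketched in a few lines of Python; the in-memory dict is a hypothetical stand-in for memcached's item store:

```python
# Lazy expiration: store an expiry timestamp with each value and check it
# only when the key is accessed; no background sweeper thread is needed.
import time

store = {}

def set_key(key, value, ttl):
    store[key] = (value, time.time() + ttl)   # remember when the item expires

def get_key(key):
    item = store.get(key)
    if item is None:
        return None
    value, expires_at = item
    if time.time() >= expires_at:             # checked only at access time
        del store[key]                        # expired: clear it now
        return None
    return value

set_key("session:42", "alice", ttl=0.1)
print(get_key("session:42"))                  # still fresh: returns the value
time.sleep(0.15)
print(get_key("session:42"))                  # None: found expired on access
```

Nothing removes the item while it sits unused; it is only discovered to be dead (and deleted) when someone next asks for it.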
6. Briefly describe the principles of memcached's memory management mechanism.
The full name of malloc is memory allocation, i.e. dynamic memory allocation. It is used when the required amount of memory cannot be known in advance: real memory space is bound only when memory is requested at run time.
Early memcached allocated memory with malloc and reclaimed it with free. This approach easily produces memory fragmentation and reduces the efficiency of the operating system's memory management, adding burden to the OS memory manager; in the worst case, the OS becomes slower than the memcached process itself. The slab allocator memory allocation mechanism was born to solve these problems.
Today memcached uses the slab allocation mechanism to allocate and manage memory.
The principle of the slab allocation mechanism: the memory allocated to memcached is divided, according to predetermined sizes, into memory blocks (chunks) of specific lengths, and chunks of the same size are grouped together (a slab class). These chunks are never released; they are reused.
The memcached server keeps a list of free chunks within each slab, selects a chunk from the list, and caches data in it. When data is stored, memcached selects, based on the size of the received data, the slab class that best fits the data, i.e. allocates the smallest chunk that can hold it. For example, 100 bytes of data will be placed in a 112-byte chunk, wasting 12 bytes that cannot be used for anything else. This is one disadvantage of the slab allocator mechanism.
The slab allocator also reuses allocated memory: allocated memory is not freed, but reused.
Main terms of slab allocation:
Page: the memory space allocated to a slab, 1 MB by default. After being assigned to a slab, it is divided into chunks according to that slab's chunk size.
Chunk: a block of memory used to cache data.
Slab class: a group (collection) of chunks of a specific size.
Disadvantages of memcached's slab allocator memory management mechanism:
The slab allocator solved the original memory fragmentation problem, but the new mechanism brought memcached a new problem: because memory is allocated in chunks of specific lengths, the allocated memory cannot be used fully. For example, caching 100 bytes of data in a 128-byte chunk wastes the remaining 28 bytes.
Ways to reduce this waste: pre-calculate the sizes of the data the application will store, or store data of the same business type on the same memcached server, so that stored items are relatively uniform in size. Another option is to specify the -f parameter at startup to control the size difference between slab classes. When using memcached in an application, this parameter can usually be left at its default of 1.25. To optimize memcached's memory usage, recalculate the expected average data length and adjust this parameter to an appropriate value.
-f <factor>   chunk size growth factor (default: 1.25)
Tuning the slab allocator memory management mechanism with the growth factor
At startup, memcached lets you specify the growth factor (via the -f option) and thereby control, to some extent, the size differences between slab classes. The default value is 1.25. Before this option existed, the factor was fixed at 2, the so-called "powers of 2" policy. Let's start memcached in verbose mode with that old setting:
# memcached -f 2 -vv
The following is the verbose output after startup:
As shown, starting from a 128-byte group, chunk sizes double each time. The problem with this setting is that the differences between slab classes are large, which in some cases wastes considerable memory. The growth factor option was therefore added to minimize memory waste.
Now look at the output with the default setting (-f 1.25):
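The effect of the growth factor can also be illustrated by computing the chunk-size series directly. The 96-byte and 128-byte base sizes below are assumptions (the real base depends on the memcached version and item-header overhead), and memcached additionally aligns sizes, which this sketch omits:

```python
# Chunk sizes per slab class for a given growth factor: each class is the
# previous chunk size multiplied by the factor, up to the page size.

def chunk_sizes(base, factor, page=1024 * 1024):
    sizes, size = [], float(base)
    while size <= page:
        sizes.append(int(size))
        size *= factor
    return sizes

doubling = chunk_sizes(base=128, factor=2.0)    # old "powers of 2" policy
gentle = chunk_sizes(base=96, factor=1.25)      # default policy

print(doubling[:6])   # [128, 256, 512, 1024, 2048, 4096]: large gaps
print(gentle[:6])     # much finer-grained classes, so less per-item waste

# A 100-byte item lands in the smallest chunk that fits it:
fit = next(s for s in gentle if s >= 100)
print(f"100-byte item -> {fit}-byte chunk, {fit - 100} bytes wasted")
```

With factor 2, a 130-byte item would already need a 256-byte chunk (126 bytes wasted); with factor 1.25 the nearest class is far closer to the item's real size.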
7. What are memcached's deletion principle and deletion mechanism?
Memcached's main cache eviction mechanism is the LRU (least recently used) algorithm, combined with its own expiration mechanism. When saving data into memcached, you can specify how long it may stay in the cache: forever, or until some point in the future. If memcached runs out of memory, expired items are replaced first, then the least recently used items. In some cases (e.g. a full cache) where you do not want LRU eviction, you can start memcached with the -M parameter so that memcached returns an error when memory is exhausted:
-M   return error on memory exhausted (rather than removing items)
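The eviction order described above (expired items first, then the least recently used item) can be sketched with a tiny Python cache; this is an illustrative model, not memcached's actual implementation:

```python
# Eviction sketch: when the cache is full, drop an already-expired item if
# one exists, otherwise drop the least recently used item. OrderedDict
# tracks access order for LRU.
import time
from collections import OrderedDict

class TinyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()            # key -> (value, expires_at)

    def set(self, key, value, ttl=3600):
        if len(self.items) >= self.capacity and key not in self.items:
            self._evict()
        self.items[key] = (value, time.time() + ttl)
        self.items.move_to_end(key)           # mark as most recently used

    def get(self, key):
        item = self.items.get(key)
        if item is None:
            return None
        self.items.move_to_end(key)           # a read also refreshes recency
        return item[0]

    def _evict(self):
        now = time.time()
        for k, (_, exp) in list(self.items.items()):
            if exp <= now:                    # 1. prefer already-expired items
                del self.items[k]
                return
        self.items.popitem(last=False)        # 2. otherwise drop the LRU item

cache = TinyCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")              # touch "a" so "b" becomes least recently used
cache.set("c", 3)           # cache full: evicts "b" (LRU), keeps "a"
print(sorted(cache.items))  # ['a', 'c']
```

With -M, step 2 would instead raise an error rather than silently dropping a live item.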
8. Server and client installation, deployment, and testing of the memcached service
Installing memcached is relatively simple. Many platforms support memcached; common ones are Linux, FreeBSD, Solaris, and Windows.
Software Address:
memcached:http://www.danga.com/memcached/
libevent:http://monkey.org/~provos/libevent/
Installation reference: http://instance.iteye.com/blog/1691705
Install libevent before installing memcached. libevent was introduced in a previous article; first download it with wget:
Operation commands:
tar zxf libevent-1.4.13-stable.tar.gz
cd libevent-1.4.13-stable
./configure
make
make install
Installing the memcached server
Operation process:
tar xf memcached-1.4.13.tar.gz
cd memcached-1.4.13
./configure
make
make install
Tips:
memcache-2.2.4.tgz      <-- client
memcached-1.4.13.tar.gz <-- server
Start memcached
(1) Configure the ld.so.conf path to prevent errors when starting memcached
# memcached --help
memcached: error while loading shared libraries: libevent-1.4.so.2: cannot open shared object file: No such file or directory
echo "/usr/local/lib" >> /etc/ld.so.conf
ldconfig
(2) Start memcached
# memcached -m 16m -p 11211 -d -u root -c 8192
Parameter description:
-m  amount of memory to allocate to the cache
-p  TCP port to listen on
-d  run as a daemon in the background
-c  maximum number of concurrent connections
(3) Check the startup result:
netstat -lntup
ps -ef | grep memcached
(4) Start multiple instances
memcached -m 1m -p 11212 -d -u root -c 8192
(5) Write data and check the results
Data added to memcached is stored as key-value pairs:
key1 -> value1
key2 -> value2
A. Writing via nc
# printf "set key008 0 0 10\r\noldboy0987\r\n" | nc 127.0.0.1 11211   # store data
Tip: the byte count in the command is 10, so it must be followed by exactly 10 characters; otherwise the add fails.
# printf "get key008\r\n" | nc 127.0.0.1 11211        # fetch data
# printf "delete key008\r\n" | nc 127.0.0.1 11211     # delete data
B. Writing via telnet
# telnet 127.0.0.1 11211
Escape character is '^]'.
set user 0 0 6       # set a key
oldboy               # the 6-byte value to store
STORED
get user             # fetch the key's value
VALUE user 0 6
oldboy
END
Syntax of the memcached set command:
set key 0 0 10
<command> <key> <flags> <exptime> <bytes>\r\n
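The command line above can also be built programmatically; the key point is that <bytes> must equal the exact length of the value that follows. A small Python sketch (the actual socket send is omitted, since it needs a running server):

```python
# Build a memcached text-protocol "set" command. The <bytes> field must
# match the value's length exactly, or the server rejects the write.

def build_set(key, value, flags=0, exptime=0):
    data = value.encode()
    header = f"set {key} {flags} {exptime} {len(data)}\r\n".encode()
    return header + data + b"\r\n"

cmd = build_set("key008", "oldboy0987")
print(cmd)  # b'set key008 0 0 10\r\noldboy0987\r\n'
# This byte string is exactly what `printf ... | nc 127.0.0.1 11211` sends.
```

Computing len(data) instead of hard-coding it is what the earlier tip about "10 bytes followed by exactly 10 characters" boils down to.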
Installing the memcache client (PHP extension)
wget http://pecl.php.net/get/memcache-2.2.5.tgz
tar zxvf memcache-2.2.5.tgz
cd memcache-2.2.5
/usr/local/php/bin/phpize
./configure --enable-memcache --with-php-config=/usr/local/php/bin/php-config --with-zlib-dir
make && make install
After the installation completes, a memcache.so file will exist in the following directory:
/usr/local/php/lib/php/extensions/no-debug-non-zts-200906026/
Then modify PHP's php.ini file:
==> change extension_dir = "./"
==> to: extension_dir = "/usr/local/php/lib/php/extensions/no-debug-non-zts-200906026/"
==> and add a line: extension=memcache.so
Then restart Apache or Nginx.
==> Test connecting to memcached from a program
# cat op_mem.php
<?php
$memcache = new Memcache;                             // create a Memcache object
$memcache->connect('127.0.0.1', 11212) or die("Could not connect");  // connect to the memcached server
$memcache->set('key001', 'oldboy001');                // store key001 with value oldboy001
$memcache->set('key002', 'oldboy002');                // store key002 with value oldboy002
$get_value01 = $memcache->get('key001');              // fetch the key001 value from memory
$get_value02 = $memcache->get('key002');              // fetch the key002 value from memory
echo $get_value02 . "<br>";
echo $get_value01;
?>
9. How is shared session storage implemented in a cluster?
Modify the php.ini file: in the [Session] section, set the following:
session.save_handler = memcache
session.save_path = "tcp://10.0.0.7:11211"
10. How to obtain status information of the memcached service, for example the hit ratio?
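memcached's stats command is sent the same way as set/get, e.g. `printf "stats\r\n" | nc 127.0.0.1 11211`; its output includes the get_hits and get_misses counters, and the hit ratio is get_hits / (get_hits + get_misses). A sketch that parses a sample of that output (the counter values here are invented for illustration):

```python
# Compute the cache hit ratio from memcached `stats` output.
# Sample output lines (counter values are made up for this example):
sample = """STAT get_hits 9500
STAT get_misses 500
STAT curr_connections 10
END"""

def parse_stats(text):
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]      # name -> value (as a string)
    return stats

stats = parse_stats(sample)
hits = int(stats["get_hits"])
misses = int(stats["get_misses"])
ratio = hits / (hits + misses) if hits + misses else 0.0
print(f"hit ratio: {ratio:.2%}")            # hit ratio: 95.00%
```

The same parsed counters (curr_connections, bytes, evictions, and so on) are what a monitoring system would watch over time.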
========================================
11. Which metrics should Nagios monitor for memcached?
12. What is Redis, and what is its role?
13. What is the difference between memcached and Redis?