Monitoring Status
<?php
$mem = memcache_connect("192.168.88.88", 11211);
$stats = $mem->getStats();
var_dump($stats);
With the PHP Memcache extension we can easily retrieve memcached's state. Printing the above array yields a great deal of information; there is little point enumerating all of it here.
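For instance, the hit ratio and space usage can be derived directly from fields of that stats array. A minimal sketch, where `cache_health` is a hypothetical helper and the sample numbers are made up for illustration (`get_hits`, `get_misses`, `bytes` and `limit_maxbytes` are standard memcached statistics):

```php
<?php
// Derive two health metrics from a memcached stats array.
function cache_health(array $stats) {
    $total = $stats['get_hits'] + $stats['get_misses'];
    return array(
        // Fraction of get requests answered from the cache.
        'hit_ratio'  => $total > 0 ? $stats['get_hits'] / $total : 0.0,
        // Fraction of the allocated memory currently in use.
        'space_used' => $stats['bytes'] / $stats['limit_maxbytes'],
    );
}

// Sample numbers; a real array comes from $mem->getStats().
$stats = array(
    'get_hits'       => 900,
    'get_misses'     => 100,
    'bytes'          => 33554432,   // 32 MB in use
    'limit_maxbytes' => 67108864,   // 64 MB allocated
);
$health = cache_health($stats);
printf("hit ratio: %.2f, space used: %.2f\n",
       $health['hit_ratio'], $health['space_used']);
// prints: hit ratio: 0.90, space used: 0.50
```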
Among these statistics, three metrics deserve regular attention:
Space usage: keeping a constant eye on cache space usage tells us when the cache system needs to be scaled up.
Cache hit ratio: the fraction of reads served directly from the cache.
I/O traffic: the rate at which memcached reads and writes bytes reflects its workload and tells us whether it is idle or busy.

Cache Extension
When it comes to cache capacity, scaling up would mean adding physical memory to the server, which quickly becomes impractical; instead we scale out by adding new cache servers.
192.168.88.88
192.168.99.99
Now that we have 2 memcached cache servers, a new problem appears: how do we distribute the cached data evenly across them?
We can use key-based partitioning, analogous to horizontal partitioning in databases. We need a hashing algorithm that does not depend on the content of the data items and that spreads the keys of all data items across the cache servers.
A simple and effective approach is the modulo ("take the remainder") operation.
Before taking the remainder, some preparation is needed: we must turn the key into an integer that is as unique as possible. For example, for the following key:
echo md5("artical20180120.html"); // ca01797073d94e12ff3c7b9ebe045dd1
// md5() returns a 32-character hexadecimal string, i.e. a very long integer.
// To reduce overhead we keep only its first 5 characters: ca017
echo hexdec('ca017'); // 827415
// Finally, take the result modulo 2 (the number of servers):
php -r "echo 827415 % 2;" // 1
With that, we have turned the key into a reasonably unique integer and performed the modulo operation.
The remainder is the cache server's number: with our 2 servers numbered from 0, the result 1 denotes the second server.
Since this is a bit involved, we wrap the steps in a "cache connector". We only need to hand the connector a key; selecting the cache server is its job.
function memcache_connector($key) {
    $hosts = array("192.168.88.88", "192.168.99.99");
    $hosts_index = hexdec(substr(md5($key), 0, 5)) % count($hosts);
    $host = $hosts[$hosts_index];
    return memcache_connect($host, 11211);
}
Now, whenever we access the cache, we simply connect through it:
$mem = memcache_connector("artical20180120.html");
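To see how the connector spreads keys without touching the network, the index computation can be run on its own (`host_index` is a hypothetical helper repeating the connector's arithmetic; the extra key names are made up for illustration):

```php
<?php
// The same server-selection arithmetic as memcache_connector(),
// without actually connecting.
function host_index($key, $host_count) {
    return hexdec(substr(md5($key), 0, 5)) % $host_count;
}

echo host_index("artical20180120.html", 2); // 1, matching the manual calculation
foreach (array("artical20180121.html", "index.html") as $key) {
    echo "\n", $key, " -> server ", host_index($key, 2);
}
```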
A further thought:
When we scale the cache system, say from 2 servers to 3, the partitioning algorithm changes (modulo 2 becomes modulo 3). This raises the question of migrating cached data from one cache server to another: how should we migrate it?
In fact, migration between partitions should not be considered at all, because a cache must be prepared to be sacrificed when necessary. We have to accept from the outset that introducing a distributed cache means making peace with one fact: the cache is not persistent storage.
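The scale of that sacrifice is easy to estimate: with plain modulo hashing, most keys land on a different server once the server count changes. A rough sketch, reusing the md5-prefix arithmetic of the connector (`server_for`, the loop, and the key names are made up for illustration):

```php
<?php
// Count how many keys map to a different server when the cluster
// grows from 2 to 3 nodes under md5-prefix modulo hashing.
function server_for($key, $host_count) {
    return hexdec(substr(md5($key), 0, 5)) % $host_count;
}

$moved = 0;
$total = 10000;
for ($i = 0; $i < $total; $i++) {
    $key = "artical$i.html";
    if (server_for($key, 2) !== server_for($key, 3)) {
        $moved++;
    }
}
// For uniform hashes, about 2/3 of the keys move, i.e. they will
// miss on their next lookup and must be rebuilt from the source.
printf("%d of %d keys changed servers\n", $moved, $total);
```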