Caching: memcached and caching policies

Source: Internet
Author: User
Tags: memcached, redis
1. What is memcached

A cache is an in-memory data store: a program reads from memory much faster than it reads data from disk. When designing a program, we often consider using a cache to keep frequently accessed data in memory, which speeds up data access while reducing pressure on the disk or the database.
memcached is a convenient, ready-made implementation of such a cache. Its advantages lie in the following points:
1. Distributed caching (with support for hot deployment): data is assigned to a server by hashing the key against the list of cache-server IPs.
2. Least-recently-used eviction: the data that has gone unaccessed longest is removed from the cache first.
3. Fast allocation: it quickly finds a storage block of a suitable size, avoiding wasted memory.
4. Key/value storage: data is stored and retrieved by key, which makes the cache easy to read, modify, and manage.
5. Socket communication: the caching server and the application server are separated.
Beyond these, memcached has many other strengths. One problem we still face is that cached data is not synchronized: for example, if a cache server crashes, all the cached data on that server is lost, and the crashed server's IP must be removed from the configuration promptly, which requires us to write extra code for the corresponding control.
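The server-selection step in point 1 can be sketched as follows. This is a minimal illustration, not memcached's actual client code; the server list and the use of simple modulo hashing are assumptions (real clients often use consistent hashing so that removing one server remaps fewer keys):

```python
import hashlib

def pick_server(key: str, servers: list[str]) -> str:
    """Map a cache key to one server by hashing the key.

    Simple modulo hashing: the same key always lands on the same
    server as long as the server list does not change.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(servers)
    return servers[index]

# Hypothetical cache-server list
servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]
print(pick_server("user:42", servers))  # deterministic for a given key
```

Note how this also explains the crash problem above: when a server dies and is removed from the list, every key's modulo result can change, so most cached entries effectively become unreachable until repopulated.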

2. memcached's internal mechanism

To speed up data storage, memcached divides the memory assigned to it into many blocks of different sizes when the service starts. Whenever data of any size needs to be cached, memcached automatically finds a memory block that fits the data and puts the data in that block. This approach reduces both the amount of wasted memory and the time spent allocating it.
If memcached's memory is already fully in use and new data must be added, memcached evicts the data that has gone unused longest. Each time data is added to the cache it is given an expiration time; once that time is exceeded, the cached entry is automatically invalidated.
memcached runs as a service listening on port 11211 by default, and we can establish communication with it over a socket. Once the connection is established, we can add, retrieve, modify, or delete cached data by sending memcached commands.
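The block-selection idea above can be illustrated with a toy size-class chooser. The size classes below are made up for illustration; memcached derives its real slab classes from a base chunk size and a growth factor:

```python
# Hypothetical size classes, in bytes. memcached's real classes are
# generated by multiplying a base chunk size by a growth factor.
SIZE_CLASSES = [64, 96, 144, 216, 324, 486]

def pick_class(item_size: int) -> int:
    """Return the smallest size class that can hold the item.

    Choosing the smallest fitting block is what keeps the wasted
    space per item low.
    """
    for cls in SIZE_CLASSES:
        if item_size <= cls:
            return cls
    raise ValueError("item too large for any size class")

print(pick_class(100))  # -> 144, the smallest class that holds 100 bytes
```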

3. How to operate memcached from C#

To connect to memcached from C#, first establish socket communication on port 11211 and build a socket pool, then issue command statements to the remote service over the socket. When establishing each connection, you need to set a maximum connection timeout; if no maximum is set, a failed connection attempt leaves the machine waiting a long time before the connection-timeout exception is finally thrown.
In practice, a thread pool and pooled socket connections are used, and each operation is then performed over a socket from the pool.
In fact, because memcached runs as a standalone service, clients in different languages and on different platforms can connect to it and execute commands.

4. memcached commands and term explanations

Once a client in any language has connected to the memcached service, the same set of commands is executed against it.

On Windows we can use the telnet command to talk to memcached directly, for example:
telnet xxx.xxx.xxx.xxx 11211
After connecting, enter the stats command. The screen stays blank while you type (input is not echoed), but after pressing Enter you can see memcached's runtime information.
The basic memcached client commands are:
set
add
replace
get
delete
The meanings of the terms reported by stats are:

pid: process ID of the memcached server
uptime: time since the server started, in seconds
time: current system time on the server
version: memcached version number
pointer_size: pointer size of the server's host operating system, typically 32 or 64
curr_items: number of objects currently stored in the cache
total_items: total number of objects stored from the time the memcached service started until now, including objects that have since been deleted
bytes: storage space used for cached objects, in bytes
curr_connections: number of connections currently open
total_connections: total number of connections opened from the time the service started until now
cmd_get: number of cache queries (get requests), whether or not they succeeded
cmd_set: number of store (set) requests; only successful stores are counted here
get_hits: number of times data was successfully retrieved
get_misses: number of times a retrieval failed
evictions: number of cached objects removed from the cache to free space for new items, for example objects evicted by the LRU algorithm when the cache size is exceeded
bytes_read: total bytes the memcached server has read from the network
bytes_written: total bytes the memcached server has written to the network
limit_maxbytes: maximum number of bytes the memcached service is allowed to use for its cache
threads: number of worker threads
Cache hit ratio = get_hits / cmd_get * 100%
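The hit-ratio formula at the end can be computed directly from a stats snapshot. The sample numbers below are made up:

```python
def hit_ratio(get_hits: int, cmd_get: int) -> float:
    """Cache hit ratio as a percentage: get_hits / cmd_get * 100."""
    if cmd_get == 0:
        return 0.0  # no queries yet, avoid division by zero
    return get_hits / cmd_get * 100

# Hypothetical values read from a `stats` response
print(hit_ratio(get_hits=900, cmd_get=1000))  # 90.0
```

A low hit ratio together with a high evictions count usually means the cache is too small for the working set.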

5. Good caching

A good cache requires the following:
1. The ability to flexibly specify different cache times for different types of data.
2. Cache configuration and management through a configuration file or a management site.
3. Flexible management of caching for different users.
4. A stable caching server that backs up cached data in a timely manner.
5. Flexible management of the cached data itself.
The best caching strategy I have come across is to put the cache configuration in an XML or JSON configuration file.
For example:
<?xml version="1.0" encoding="utf-8"?>
<Root>
     <Item>
          <CacheTime>10000</CacheTime>
          <DataType>B2C</DataType>
          <Enabled>True</Enabled>
     </Item>
     <Item>
          <CacheTime>10000</CacheTime>
          <DataType>PHONE</DataType>
          <Enabled>True</Enabled>
     </Item>
</Root>
With a configuration like the one above, we can specify a cache time for each data type and enable or disable caching for that type.
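Reading such a file is straightforward. A sketch using the element names from the example above (the `load_cache_config` helper and the returned dictionary shape are illustrative choices, not a fixed API):

```python
import xml.etree.ElementTree as ET

CONFIG = """<?xml version="1.0" encoding="utf-8"?>
<Root>
  <Item>
    <CacheTime>10000</CacheTime>
    <DataType>B2C</DataType>
    <Enabled>True</Enabled>
  </Item>
  <Item>
    <CacheTime>10000</CacheTime>
    <DataType>PHONE</DataType>
    <Enabled>True</Enabled>
  </Item>
</Root>"""

def load_cache_config(xml_text: str) -> dict[str, dict]:
    """Parse the config into {data_type: {"cache_time": ..., "enabled": ...}}."""
    root = ET.fromstring(xml_text.encode("utf-8"))
    config = {}
    for item in root.findall("Item"):
        data_type = item.findtext("DataType")
        config[data_type] = {
            "cache_time": int(item.findtext("CacheTime")),
            "enabled": item.findtext("Enabled").lower() == "true",
        }
    return config

print(load_cache_config(CONFIG)["PHONE"])
```

At startup the application loads this mapping once; before caching an item it looks up its data type to decide whether to cache it and for how long.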

The XML configuration can also be made more complex to handle the fact that different users' caching times are not actually the same. Once the cache is built, managing it should be flexible and convenient: in operations it is often necessary to add, remove, or modify cached content, so flexible and easy management is very important.

6. Cache Extensions

Now that we have learned a bit about caching, where can we use it in everyday development? First, data that users frequently read can be put into the cache. Second, data that users write at high frequency can also be buffered in the cache to smooth out the write rate. Finally, some static file content of a web site can also be cached.
Consider buffering high-frequency writes through the cache: if the application establishes a database connection and writes to disk on every single user write, overall efficiency inevitably suffers. We therefore want to reduce the number of database connections and disk writes. We can specify a size threshold for the cached content, and once the buffered data reaches that order of magnitude, take the data out of the cache and write it to the database in one batch. I think the two approaches above, read caching and write buffering, can be combined to reduce the resources the program spends on database connections and disk writes.
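The write-buffering idea above can be sketched as follows. The `WriteBuffer` class, its `flush_threshold`, and the `writer` callback are all illustrative; in a real system the writer would issue a single batched database insert:

```python
class WriteBuffer:
    """Accumulate writes in memory and flush them in one batch."""

    def __init__(self, flush_threshold: int, writer):
        self.flush_threshold = flush_threshold
        self.writer = writer  # called with the list of buffered items
        self.buffer = []

    def add(self, item):
        self.buffer.append(item)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.writer(self.buffer)  # one batched write instead of many
            self.buffer = []

written = []
buf = WriteBuffer(flush_threshold=3, writer=written.extend)
for i in range(7):
    buf.add(i)   # flushes automatically after every 3rd item
buf.flush()      # flush the remainder, e.g. on shutdown
print(written)   # [0, 1, 2, 3, 4, 5, 6]
```

The trade-off is durability: items still in the buffer are lost if the process crashes, which is why the threshold (or an accompanying time limit) should be kept modest.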
Another caching service worth knowing is redis, which is very good in the following respects:

1. In Redis, not all data is held only in memory; this is the biggest difference from memcached.
2. Redis supports not only simple key/value data but also provides storage for data structures such as lists, sets, and hashes.
3. Redis supports data backup, that is, replication in master-slave mode.
4. Redis supports persistence: data in memory can be kept on disk and loaded again after a restart.
I personally think the most essential difference is that Redis has many characteristics of a database, or is itself a database system, while memcached is just a simple key/value cache.











