As a NoSQL database, Redis delivers efficient response times and hit rates. Our project needed a centralized, scalable caching framework, so we did a little research. Although Redis and memcached differ in raw efficiency (see the comparison at http://timyang.net/data/mcdb-tt-redis/), either would in fact meet the needs of the current project. Redis, however, is the more capable of the two: it supports list and set operations as well as glob-style pattern lookups on keys, and most of the cached results in this project are lists. When list data is added or modified, Redis shows a great advantage: memcached can only reload the whole list, while Redis can append to it or modify it in place.
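As a minimal illustration of that advantage, a redis-cli session might look like the following sketch (the key name mylist is invented for this example, and exact reply formatting varies between redis-cli versions):
redis> rpush mylist one
OK
redis> rpush mylist two
OK
redis> lset mylist 0 ONE
OK
redis> lrange mylist 0 -1
1. ONE
2. two
The point is that rpush and lset touch only the affected elements, while a list serialized into a single memcached value would have to be rewritten wholesale.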
1. Download Redis
Download address: http://code.google.com/p/redis/downloads/list
redis-1.2.6.tar.gz is the recommended download; the team has prior experience installing and operating this version successfully. With redis-2.0.4.tar.gz, my installation could not operate on cached data afterwards; the specific cause is left for follow-up.
2. Install Redis
After downloading, extract the archive to any directory, such as /usr/local/redis-1.2.6:
tar zxvf redis-1.2.6.tar.gz
After extraction, enter the Redis directory and build:
cd /usr/local/redis-1.2.6
make
Copy the files:
cp redis.conf /etc/    # this file is the configuration file used when Redis starts
cp redis-benchmark redis-cli redis-server /usr/bin/    # this one is quite useful: the binaries no longer need a ./ prefix and can be run from anywhere
Set the memory allocation policy (optional; adjust according to the actual server):
/proc/sys/vm/overcommit_memory
Possible values: 0, 1, 2.
0 means the kernel checks whether enough available memory exists before granting a request; if there is enough, the allocation is allowed, otherwise it fails and an error is returned to the application process.
1 means the kernel allows all physical memory to be allocated regardless of the current memory state.
2 means the kernel uses strict accounting and refuses to commit more memory than swap space plus a configurable fraction (vm.overcommit_ratio) of physical memory.
It is worth noting that when Redis dumps data it forks a child process, and in theory the child's memory footprint matches the parent's: if the parent uses 8 GB, another 8 GB would nominally be needed for the child. If memory cannot accommodate this, the dump will often fail, causing the Redis server to go down or driving I/O load too high and efficiency down. So here the better memory allocation policy is 1 (the kernel allows all physical memory to be allocated regardless of the current memory state).
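A minimal sketch of applying the recommended setting (run as root; the second command is only needed if the setting should survive a reboot):
echo 1 > /proc/sys/vm/overcommit_memory
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf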
Open the Redis port by modifying the firewall configuration file:
vi /etc/sysconfig/iptables
Add the port rule:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 6379 -j ACCEPT
Reload the rules:
service iptables restart
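As a quick optional check that the rule is active:
iptables -L -n | grep 6379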
3. Start Redis Service
[root@architect redis-1.2.6]# pwd
/usr/local/redis-1.2.6
[root@architect redis-1.2.6]# redis-server /etc/redis.conf
Check the process list to confirm that Redis has started:
[root@architect redis-1.2.6]# ps -ef | grep redis
root 401 29222 0 18:06 pts/3 00:00:00 grep redis
root 29258 1 0 16:23 ? 00:00:00 redis-server /etc/redis.conf
If the Redis service fails to start here, it is generally because the redis.conf file has a problem; it is recommended to check it, or to find a known-good configuration file to copy over it and avoid detours. It is also suggested here to modify redis.conf so that the Redis process runs as a background daemon:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
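After restarting Redis with daemonize yes, a quick connectivity check can confirm the daemon is accepting connections (PONG is the expected reply):
[root@architect redis-1.2.6]# redis-cli ping
PONG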
4. Test Redis
[root@architect redis-1.2.6]# redis-cli
redis> set name Songbin
OK
redis> get name
"Songbin"
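The pattern-based key lookup mentioned in the introduction can be tried in the same session; keys takes a glob-style pattern, and reply formatting differs between redis-cli versions:
redis> keys name*
name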
5. Shut Down the Redis Service
redis-cli shutdown
When the Redis service shuts down, the cached data is automatically dumped to disk; the dump file name is set by the dbfilename dump.rdb configuration item in redis.conf.
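For reference, the persistence settings in redis.conf look roughly like this (these values are the stock defaults; dir controls the directory the dump file is written into):
save 900 1
save 300 10
save 60 10000
dbfilename dump.rdb
dir ./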
To force a backup of the data to disk, use the following command:
redis-cli save    (or redis-cli -p 6380 save, to specify a port)
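Note that save blocks the server while the dump is written; bgsave performs the same dump in a forked child process instead (the fork behavior discussed in the memory policy section above), so it is usually preferable on a busy server:
redis-cli bgsave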