Redis offers a new way of storing data. With Redis we no longer have to work out how to fit every elephant into the single refrigerator of a monolithic database; instead, Redis provides flexible data structures and operations, building a different refrigerator for each elephant. Common Redis data types:
Reprint: http://fanshuyao.iteye.com/blog/2384074
First, download Redis: https://github.com/MicrosoftArchive/redis/releases
1. Redis-x64-3.2.100.msi is the installer version
2. Redis-x64-3.2.100.zip is the compressed package
Second, since I used the installer version, the issues below also concern the installer version.
1. The directory after installation 2. Th
still be limited to 4 GB or less. If that limit is too small, or the instance shares the machine with other applications, a 64-bit Redis instance may be required, and in that case switching to 32-bit is not an option. Either way, Redis dump files are compatible between 32-bit and 64-bit builds, so if you need to reduce memory usage you can try 32-bit first and switch to 64-bit later.
The DECRBY command decrements the number stored at key by the given decrement. Redis APPEND command: if the key already exists and holds a string, APPEND appends value to the end of the key's existing value. Redis hash commands: the HDEL command deletes one or more fields from a hash.
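As a sketch of the semantics just described, the commands can be modeled on plain Python dicts rather than a live Redis server; the function names here are my own, not a real client API.

```python
# DECRBY, APPEND, and HDEL semantics modeled on plain Python dicts
# (illustration only, not the redis-py client).

def decrby(db, key, decrement):
    """DECRBY: decrement the integer at key; a missing key starts at 0."""
    db[key] = int(db.get(key, 0)) - decrement
    return db[key]

def append(db, key, value):
    """APPEND: append to the existing string value; returns the new length."""
    db[key] = db.get(key, "") + value
    return len(db[key])

def hdel(db, key, *fields):
    """HDEL: remove one or more fields from the hash at key; returns how many were removed."""
    h = db.get(key, {})
    return sum(1 for f in fields if h.pop(f, None) is not None)

db = {"counter": 10, "greeting": "hello"}
assert decrby(db, "counter", 3) == 7
assert append(db, "greeting", " world") == 11
db["user"] = {"name": "ada", "age": "36"}
assert hdel(db, "user", "age", "missing") == 1  # only "age" existed
```

Note that, like the real commands, `decrby` treats a missing key as 0 and `hdel` counts only the fields that actually existed.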
Redis Learning Guide
I. Introduction
Redis is an open-source, log-type, high-performance key-value database written in ANSI C, with network support and in-memory storage with persistence, and it provides APIs in many languages. Speaking of key-value NoSQL databases, MongoDB also comes to mind. Redis is similar to Memcached, but supports more stored value types
memory. You can use SLOWLOG RESET to reclaim the memory used by the slow log. slowlog-max-len 128
Redis configuration-latency monitoring
# Latency monitoring is disabled by default since it is mostly not needed. The threshold is in milliseconds: latency-monitor-threshold 0
Redis configuration-Event Notification
# Redis can notify Pub/Sub clients about events in the key space. # This feature is documented i
Definition of a hash table: the basic idea of hash storage is to take the keyword key as an independent variable and, through some functional relation (the hashing function, or hash function), compute the corresponding function value (the hash address); this value is used as the storage address of the data element, and the element is stored there.
; otherwise, returns null. ZRevRank ranks members with scores ordered from high to low (descending), the reverse of ZRank.
ZIncrBy
$redis->zIncrBy('key', increment, 'member');
If the element member already exists in the zset named key, its score is increased by increment; otherwise, the element is added to the set with score equal to increment.
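The ZINCRBY behavior just described can be sketched with a plain dict as the sorted set's member-to-score mapping (illustrative only, not the phpredis client from the text):

```python
# ZINCRBY semantics on a dict standing in for a sorted set.

def zincrby(zset, increment, member):
    """If member exists, add increment to its score; otherwise insert it
    with score == increment. Returns the new score, as ZINCRBY does."""
    zset[member] = zset.get(member, 0.0) + increment
    return zset[member]

scores = {}
assert zincrby(scores, 5.0, "alice") == 5.0   # inserted with score 5
assert zincrby(scores, 2.5, "alice") == 7.5   # existing score incremented
```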
ZUnion/ZInter
Parameters:
keyOutput
arrayZSetKeys
arrayWeights
aggregateFunction: either "SUM", "MIN", or "MAX" — defines the behaviour to use on duplicate entries during the union.
The fifth chapter of the hashThe implementation of a hash table is often called a hash (hashing). Hashing is a technique used to perform insertions, deletions, and lookups with constant average time.There is a very important concept about hashing: hash function. The hash function is one of the key points of the
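The "hash function maps a key to an address" idea above can be sketched as a minimal separate-chaining hash table; this is a didactic toy, not Redis's actual implementation, and the class name is my own.

```python
# A minimal separate-chaining hash table: the hash function computes a
# bucket address; colliding keys are chained in a list per bucket.

class ChainedHashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        # The hash function: key -> bucket address.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # update existing entry in place
                return
        bucket.append((key, value))        # collision -> extend the chain

    def get(self, key, default=None):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return default

t = ChainedHashTable()
t.put("name", "redis")
t.put("name", "Redis")
assert t.get("name") == "Redis"
assert t.get("missing") is None
```

With a good hash function and a bounded load factor, average insert/delete/lookup time stays constant, as the chapter states.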
SetEx: store a value with an expiration period.
$redis->setex('str', 10, 'bar'); // the stored value is valid for 10 seconds.
// Setnx/msetnx are equivalent to an add operation and do not overwrite existing values.
$redis->setnx('foo', 12); // true
$redis->setnx('foo', 34); // false
// Getset, a variant of set: returns the value held before the replacement.
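The three SET variants above can be sketched in plain Python with a dict plus explicit expiry times; the function names are my own, standing in for the phpredis calls shown in the snippets.

```python
# SETEX, SETNX, and GETSET semantics modeled on a dict (not a real client).
import time

db, expires = {}, {}

def setex(key, ttl, value):
    """SETEX: set a value that is valid for ttl seconds."""
    db[key] = value
    expires[key] = time.monotonic() + ttl

def setnx(key, value):
    """SETNX: set only if absent; an existing value is NOT overwritten."""
    if key in db:
        return False
    db[key] = value
    return True

def getset(key, value):
    """GETSET: replace the value, returning the one held before (None if absent)."""
    old = db.get(key)
    db[key] = value
    return old

assert setnx("foo", 12) is True
assert setnx("foo", 34) is False   # foo keeps its first value
assert db["foo"] == 12
assert getset("foo", 99) == 12
assert db["foo"] == 99
```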
2. Linked list: stores objects in sequence and keeps an attribute caching the list's length, so length queries avoid a traversal loop; insertion and deletion of objects perform well.
3. Dictionary: the key-value storage structure. The storage slot for a key is determined by computing its hash value. When the dictionary grows too large, it is resized and its entries re-placed via rehash.
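The resize-and-rehash step can be sketched as follows; for simplicity this rehashes all at once, whereas Redis's dict rehashes incrementally across two tables. The class and threshold are illustrative assumptions, not Redis internals.

```python
# Grow-and-rehash sketch: when the load factor passes a threshold, double
# the bucket array and re-place every entry by its new hash address.

class ResizingDict:
    LOAD_FACTOR = 1.0

    def __init__(self):
        self.size = 4
        self.count = 0
        self.buckets = [[] for _ in range(self.size)]

    def _rehash(self):
        old = self.buckets
        self.size *= 2
        self.buckets = [[] for _ in range(self.size)]
        for bucket in old:
            for k, v in bucket:
                self.buckets[hash(k) % self.size].append((k, v))

    def put(self, k, v):
        b = self.buckets[hash(k) % self.size]
        for i, (bk, _) in enumerate(b):
            if bk == k:
                b[i] = (k, v)
                return
        b.append((k, v))
        self.count += 1
        if self.count / self.size > self.LOAD_FACTOR:
            self._rehash()

    def get(self, k):
        for bk, v in self.buckets[hash(k) % self.size]:
            if bk == k:
                return v

d = ResizingDict()
for i in range(20):
    d.put(f"k{i}", i)
assert d.size > 4          # the table grew via rehash
assert d.get("k13") == 13  # entries remain reachable at their new addresses
```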
5. Skip list
5.1 Overview
A skip list is an ordered data structure that maintains multiple levels of forward pointers in each node, giving O(log N) expected time for search, insertion, and deletion.
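A tiny skip list supporting insert and membership search can sketch the multi-level structure; this is illustrative only, not Redis's zskiplist, and the constants (max level 8, promotion probability 0.5) are arbitrary choices.

```python
# Minimal skip list: each node carries one forward pointer per level;
# higher levels skip over more nodes, so search descends level by level.
import random

MAX_LEVEL = 8

class Node:
    def __init__(self, value, level):
        self.value = value
        self.forward = [None] * level   # one forward pointer per level

class SkipList:
    def __init__(self):
        self.head = Node(None, MAX_LEVEL)
        self.level = 1

    def _random_level(self):
        lvl = 1
        while random.random() < 0.5 and lvl < MAX_LEVEL:
            lvl += 1                    # promote with probability 1/2
        return lvl

    def insert(self, value):
        update = [self.head] * MAX_LEVEL
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].value < value:
                node = node.forward[i]
            update[i] = node            # last node before value on level i
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(value, lvl)
        for i in range(lvl):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def contains(self, value):
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].value < value:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.value == value

s = SkipList()
for v in [3, 1, 7, 5]:
    s.insert(v)
assert s.contains(5) and not s.contains(4)
```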
Computes the union or intersection of the N zsets and stores the resulting set in dstKeyN. Before the aggregate operation, each element's score is multiplied by its set's weight parameter. If weights are not provided, they default to 1. The default aggregate is SUM, i.e. the score of an element in the result set is the sum of that element's scores across all the source sets.
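The weights-then-aggregate rule can be sketched in plain Python for the union case; this is a semantic model, not a real client call, and the function name is my own.

```python
# ZUNIONSTORE semantics: multiply each score by its set's weight, then
# combine duplicate members with SUM, MIN, or MAX.

def zunionstore(zsets, weights=None, aggregate="SUM"):
    weights = weights or [1] * len(zsets)   # weight defaults to 1
    combine = {"SUM": lambda a, b: a + b, "MIN": min, "MAX": max}[aggregate]
    out = {}
    for zset, w in zip(zsets, weights):
        for member, score in zset.items():
            weighted = score * w
            out[member] = combine(out[member], weighted) if member in out else weighted
    return out

a = {"x": 1, "y": 2}
b = {"y": 3, "z": 4}
assert zunionstore([a, b]) == {"x": 1, "y": 5, "z": 4}   # SUM is the default
assert zunionstore([a, b], aggregate="MAX")["y"] == 3
assert zunionstore([a, b], weights=[10, 1])["y"] == 23   # 2*10 + 3
```

Intersection works the same way, except members missing from any source set are dropped from the result.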
virtual memory.
vm-pages: sets the total number of pages in the swap file.
vm-max-threads: sets the number of threads used concurrently for VM I/O.
glueoutputbuf: glues small output buffers together.
hash-max-zipmap-entries: sets the threshold of entries above which a hash stops using the compact zipmap encoding.
activerehashing: enables active (incremental) rehashing.
**************************************** ***************************
Five data types: strings, lists, hashes, sets, and sorted sets (zsets).
node, that is, the current node. The status of the entire cluster is fail.
4. Allocate hash slots
Through the operations above, we have brought three independent Redis nodes into the same cluster. Have we now finished all the work of building the cluster? Not yet! Checking the cluster status in Figure 2 shows that the state is still fail. At this time, the
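Slot allocation matters because Redis Cluster maps every key to one of 16384 hash slots, and a cluster is only healthy once all slots are covered. A sketch of that key-to-slot mapping, assuming the CRC-16/XMODEM variant Redis Cluster uses:

```python
# Key -> hash-slot mapping sketch: slot = CRC16(key) mod 16384,
# using the CRC-16/XMODEM parameters (poly 0x1021, init 0).

def crc16_xmodem(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    return crc16_xmodem(key) % 16384

assert crc16_xmodem(b"123456789") == 0x31C3   # CRC-16/XMODEM check value
assert 0 <= hash_slot(b"user:1000") < 16384
```

The real implementation additionally honors {hash tags} (only the substring inside braces is hashed, so related keys land in the same slot), which is omitted here.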
With MIN or MAX, the score of each element in the result set is the minimum or maximum of that element's scores across the source sets.
Hash operations
HSet
$redis->hset('h', 'key1', 'hello');
Adds the element key1 -> hello to the hash named h.
HGet
$redis->hget('h', 'key1');
Returns the value of key1 in the hash named h (here, 'hello').