Redis Java client Jedis for connection pooling + simple load balancing


1. Download the Redis_win_2.6.13.zip installation package (search for it online; it is easy to find).

2. Unzip redis_win_2.6.13.zip and go to the directory containing redis-server.exe.

In this directory, create a new configuration file: redis01.conf (the file name is not fixed). The file content is as follows:

# Run as a background process
daemonize yes
# Location where the background process's pid file is written
pidfile /var/run/redis.pid
# Listening port, default is 6379
port 6379
# Accept requests only on the following bound IP
bind 127.0.0.1
# Unix socket; empty by default (do not listen on a unix socket)
# unixsocket /tmp/redis.sock
# unixsocketperm 755
# Close the connection after the client has been idle this many seconds; 0 means never close
timeout 5
# TCP keepalive. If non-zero, SO_KEEPALIVE is used to send ACKs to the client
# when the connection is idle. This has two purposes:
# 1. Detect dead peers.
# 2. Keep the connection alive from the point of view of network equipment in the middle.
# On Linux, the value is the period used to send ACKs.
# Note: the connection is closed only after about twice that time. On other kernels
# the period depends on the kernel settings. A reasonable value is 60 seconds.
tcp-keepalive 0
# Log level; the amount of information logged decreases down the list:
# debug   (for dev/test)
# verbose (less detailed than debug)
# notice  (suitable for production)
# warning (only very important messages)
loglevel notice
# Log file name. With "stdout", output goes to standard output; when running as a
# background process, no log is produced in that case.
logfile c:/users/michael/desktop/file/work/data/redis/logs/redis.log
# To enable logging to the system logger, set this option to yes
# syslog-enabled no
# Syslog identity
# syslog-ident redis
# Syslog facility; must be USER or one of LOCAL0 ~ LOCAL7
# syslog-facility local0
# Number of databases; the first database is numbered 0
databases 16

################ SNAPSHOTTING ################
# Save the database to disk when any of the following conditions is met
# (there can be several conditions; any one of them triggers a snapshot):
save 900 1      # at least 1 key changed within 900 seconds
save 300 10     # at least 10 keys changed within 300 seconds
save 60 10000   # at least 10000 keys changed within 60 seconds
# Whether to stop accepting writes when background persistence fails
stop-writes-on-bgsave-error yes
# Compress data with the LZF algorithm when writing to disk; default is yes
rdbcompression yes
# Append a CRC64 checksum to the end of the file -- costs time but is safer
rdbchecksum yes
# Name of the database dump file on disk
dbfilename dump.rdb
# Redis working directory; the database dump file and AOF log are written here
dir c:/users/michael/desktop/file/work/data/redis/01/

################ REPLICATION ################
# Master-slave replication; configure when this instance is a slave
# slaveof <masterip> <masterport>
# Password for the master, if it requires authentication
# masterauth <master-password>
# When the slave loses its link with the master, or is still synchronizing,
# whether to keep answering client requests:
# yes -> keep responding
# no  -> reply "SYNC with master in progress"
slave-serve-stale-data yes
# Whether the slave is read-only.
# Note: this does not make a read-only slave safe to expose in an untrusted network environment.
slave-read-only yes
# Interval at which the slave sends pings to the master
# repl-ping-slave-period 10
# Timeout for bulk transfer I/O, master data and ping responses; default 60s.
# This must be larger than repl-ping-slave-period, or timeouts will be detected constantly.
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
# If yes, Redis sends data to the slave in fewer, smaller TCP packets, using less
# bandwidth, but this adds latency on the slave side -- about 40 milliseconds with
# the default Linux kernel settings.
# If no, latency on the slave side is reduced but synchronization uses more bandwidth.
# The default is optimized for low latency, but when traffic is very heavy or the
# master and slave are far apart, setting yes is reasonable.
repl-disable-tcp-nodelay no
# Slave priority, default 100. When the master stops working correctly, the slave
# with the lowest number is promoted to master; 0 disables promotion entirely.
slave-priority 100

################ SECURITY ################
# Client connection password. Because Redis can answer on the order of 1,000,000
# requests per second, the password should be especially strong.
# requirepass foobared
# Commands can be renamed, or disabled by renaming them to an empty string --
# for example dangerous commands such as FLUSHALL (deletes all data).
# Note: renamed commands written to the AOF file or relayed to slaves may cause problems.
# rename-command CONFIG ""

################ LIMITS ################
# Maximum number of connected clients, default 10000. The number of connections
# actually accepted is this value minus 32, which Redis reserves for internal
# file descriptors.
# maxclients 10000
# Maximum amount of memory, especially useful when Redis is used as an LRU cache.
# Set a value smaller than what the system has available, because slave output
# buffers also consume memory.
# maxmemory <bytes>
# Eviction policy used when the memory limit is reached:
# volatile-lru    -> remove keys that have an expire set, using an LRU algorithm
# allkeys-lru     -> remove any key, using an LRU algorithm
# volatile-random -> randomly remove a key that has an expire set
# allkeys-random  -> randomly remove any key
# volatile-ttl    -> remove the key closest to expiry
# noeviction      -> do not remove keys; return an error on write requests
# The default setting is volatile-lru.
# maxmemory-policy volatile-lru
# The LRU and minimal-TTL algorithms are not exact implementations: to save
# memory, Redis only picks the best key within a sample; this sets the sample size.
# maxmemory-samples 3

################ APPEND ONLY MODE ################
# AOF and RDB persistence can be enabled at the same time. On startup Redis reads
# the AOF file, which provides a stronger durability guarantee.
appendonly no
# AOF file name; by default appendonly.aof
# appendfilename appendonly.aof
# When to fsync the append log; three modes:
# no       -> let the operating system decide when to write. Best performance, lowest safety.
# everysec -> fsync once per second. A compromise; recommended.
# always   -> fsync after every write. Safest, worst performance.
appendfsync everysec
# When the AOF fsync policy is always or everysec, and a background save process
# (background save or AOF log rewrite) generates a lot of disk I/O, on some Linux
# configurations Redis may block for a long time on the fsync() call. There is no
# fix yet; even doing fsync in a different thread blocks our synchronous write(2) call.
# To mitigate this, the following option prevents fsync() from being called in the
# main process while a BGSAVE or BGREWRITEAOF is running.
no-appendfsync-on-rewrite no
# Automatic AOF rewrite (merges commands to reduce the log size).
# Redis calls BGREWRITEAOF automatically when the AOF log grows by a given
# percentage. The principle: Redis remembers the AOF file size after the last
# rewrite (or, if it has just started, the size at startup) and compares it with
# the current size; if the ratio exceeds the threshold, a rewrite is triggered.
# A minimum size must also be specified, to avoid rewriting while the AOF file is
# still very small even though the ratio is reached.
# Set the percentage to 0 to disable automatic rewriting.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

################ LUA SCRIPTING ################
# Maximum execution time of a Lua script, in milliseconds.
# When a script runs longer than the limit, the time-out is logged, and only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands can be used. SCRIPT KILL stops a script
# that has not yet called a write command; SHUTDOWN NOSAVE is the only way to shut
# down the server while a script's write command is executing and the user does
# not want to wait for the script to end normally.
# Setting the option to 0 or a negative value removes the time limit.
lua-time-limit 5000

################ SLOW LOG ################
# Redis logs queries exceeding a set execution time. Only the time spent executing
# the command is counted; I/O operations such as talking with the client or
# sending the reply are not logged. The unit is microseconds
# (1,000,000 microseconds = 1 second).
# A negative value disables the slow log; 0 logs every command.
slowlog-log-slower-than 10000
# The log length is unbounded, but it consumes memory; when the length is
# exceeded, the oldest entry is removed. Use SLOWLOG RESET to reclaim memory.
slowlog-max-len 128

################ ADVANCED CONFIG ################
# Hashes use a memory-efficient data structure when they have only a few entries
# and no entry exceeds the set thresholds. "Small" is defined as:
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Like hashes, small lists are also encoded in a special way to save memory.
# "Small" is defined as:
list-max-ziplist-entries 512
list-max-ziplist-value 64
# Sets use a special memory-saving encoding only when they consist entirely of
# strings that are 64-bit signed base-10 integers. The following option sets the
# maximum size of such a set.
set-max-intset-entries 512
# Sorted sets use a special memory-saving encoding when their length and element
# sizes are below the following values:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# Active rehashing uses 1 millisecond out of every 100 CPU milliseconds to help
# rehash the main hash table (the top-level key-value map). Redis hash tables use
# a lazy rehashing mechanism: the more operations, the more rehashing; if the
# server is idle, rehashing never completes and more memory is consumed by the
# table. The default is to actively rehash 10 times per second, freeing memory.
# If you have hard latency requirements and an occasional 2-millisecond delay is
# not tolerable, set this to no; otherwise set it to yes.
activerehashing yes
# Client output buffer limits can be used to force-disconnect clients that read
# data too slowly. There are three classes of clients:
# normal -> normal clients
# slave  -> slaves and MONITOR clients
# pubsub -> clients subscribed to at least one channel or pattern
# The syntax is as follows (time unit: seconds):
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
# When the hard limit is reached, the connection is closed immediately; when the
# soft limit is reached, the connection stays open for up to the soft time.
# By default normal clients are unrestricted, since they only receive data after
# a request; pubsub and slave clients have default limits because they receive
# pushed data. Both limits can be set to 0 to disable the feature.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Frequency of Redis background tasks, such as clearing expired keys.
# Range is 1 to 500, default 10. Higher values use more CPU but reduce latency.
# Values above 100 are not recommended.
hz 10
# When a child process rewrites the AOF file, enabling the following option makes
# it fsync the file every 32 MB of data generated. This helps commit the file to
# disk incrementally and avoid latency spikes.
# aof-rewrite-incremental-fsync yes

################ INCLUDES ################
# Include a standard template
# include /path/to/other.conf

3. Start the Redis Service

Open a cmd window and change to the Redis installation directory.

Execute: redis-server.exe redis01.conf

4. Test the Redis Service

Enter the Redis installation directory and run the following commands to test:

redis-cli.exe -h 127.0.0.1 -p 6379

set TestKey 123

get TestKey

The next line is the console output:

"123"

--- The test is successful.


5. Start another Redis service using the steps above

Special note: you need to create a second configuration file: redis02.conf

You can copy the contents of redis01.conf and change the port in the file to 6380 (6379 is the default port number).

Open another cmd window, enter the Redis installation directory, and execute: redis-server.exe redis02.conf
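Note that if the second instance keeps redis01.conf's working directory and log file, both servers will write the same dump.rdb and log. A minimal sketch of what to change in redis02.conf (the paths below simply follow the pattern of redis01.conf and are illustrative; adjust them to your own layout):

```conf
# redis02.conf -- only these lines differ from redis01.conf
port 6380
pidfile /var/run/redis02.pid
logfile c:/users/michael/desktop/file/work/data/redis/logs/redis02.log
dir c:/users/michael/desktop/file/work/data/redis/02/
```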

6. Jedis Client Code

import java.util.ArrayList;
import java.util.List;

import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;

public class MainTest {

    public static void main(String[] args) {
        // Register both Redis instances as shards
        List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
        shards.add(new JedisShardInfo("127.0.0.1", 6379));
        shards.add(new JedisShardInfo("127.0.0.1", 6380));

        // Pooled, sharded client: keys are distributed across the two instances
        ShardedJedisPool sjp = new ShardedJedisPool(new JedisPoolConfig(), shards);
        ShardedJedis shardClient = sjp.getResource();
        try {
            shardClient.set("A", "123");
            shardClient.set("B", "234");
            shardClient.set("C", "345");
            try {
                System.out.println(shardClient.get("A"));
            } catch (Exception e) {
                e.printStackTrace();
            }
            try {
                System.out.println(shardClient.get("B"));
            } catch (Exception e) {
                e.printStackTrace();
            }
            try {
                System.out.println(shardClient.get("C"));
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Return the connection to the pool
            sjp.returnResource(shardClient);
        }
    }
}
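The "simple load balancing" here comes from ShardedJedis: it places each shard on a hash ring under many virtual nodes and routes each key to the nearest node, so keys spread roughly evenly across the two instances and a key always goes back to the same instance. The following stdlib-only sketch illustrates that idea; it is not Jedis's actual implementation (Jedis uses MurmurHash, while the FNV-1a hash, the class name ShardRingSketch, and the method names here are illustrative; only the default of 160 virtual nodes per shard mirrors Jedis):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ShardRingSketch {
    // Hash ring: position on the ring -> shard name
    private final TreeMap<Long, String> ring = new TreeMap<Long, String>();

    // Place each shard on the ring under many virtual nodes so keys spread evenly.
    public ShardRingSketch(String[] shards, int virtualNodesPerShard) {
        for (String shard : shards) {
            for (int i = 0; i < virtualNodesPerShard; i++) {
                ring.put(hash(shard + "-" + i), shard);
            }
        }
    }

    // Route a key to the first shard at or after its hash, wrapping around the ring.
    public String shardFor(String key) {
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    // Illustrative 64-bit FNV-1a hash; Jedis itself uses MurmurHash.
    private static long hash(String s) {
        long h = 0xcbf29ce484222325L;
        for (int i = 0; i < s.length(); i++) {
            h ^= s.charAt(i);
            h *= 0x100000001b3L;
        }
        return h;
    }

    public static void main(String[] args) {
        String[] shards = { "127.0.0.1:6379", "127.0.0.1:6380" };
        ShardRingSketch ring = new ShardRingSketch(shards, 160);
        for (String key : new String[] { "A", "B", "C" }) {
            System.out.println(key + " -> " + ring.shardFor(key));
        }
    }
}
```

Because routing is deterministic, a GET for "A" is sent to the same instance that handled the SET, which is why the pooled client above works without any coordination between the two servers.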


7. OK

