How to Efficiently Write Large Amounts of Data into Redis


Recently someone in a chat group asked: given a log file of IP addresses (one per line), how can these IPs be imported into Redis quickly?

My initial suggestion was a shell script plus the Redis client.

Today, while browsing the official Redis documentation, I noticed that the documentation home page (http://www.redis.io/documentation) has a dedicated topic called "Redis Mass Insertion", and realized my suggestion was rather naive.

The official reasoning is as follows:

Using a normal Redis client to perform mass insertion is not a good idea for a few reasons: the naive approach of sending one command after the other is slow because you have to pay for the round trip time for every command. It is possible to use pipelining, but for mass insertion of many records you need to write new commands while you read replies at the same time to make sure you are inserting as fast as possible.

Only a small percentage of clients support non-blocking I/O, and not all the clients are able to parse the replies in an efficient way in order to maximize throughput. For all this reasons the preferred way to mass import data into Redis is to generate a text file containing the Redis protocol, in raw format, in order to call the commands needed to insert the required data.

The gist:

1> Every command sent by a normal Redis client pays a round-trip delay.

2> Only a small fraction of clients support non-blocking I/O.

My take: each Redis command has some latency between being issued and its result coming back, so even inserting concurrently from several Redis clients makes it hard to raise throughput, because non-blocking I/O can only be used over a limited number of connections.
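
To see what the round-trip trade-off looks like in code, here is client-side pipelining with the redis-py library (a sketch under assumptions of my own: the redis-py package is installed and Redis listens on localhost:6379; the original post does not use this library):

# Sketch: amortizing round trips with redis-py pipelining.
# Assumes the redis-py package and a Redis server on localhost:6379.
import redis

r = redis.Redis(host='localhost', port=6379)
pipe = r.pipeline(transaction=False)   # plain pipelining, no MULTI/EXEC
for i in range(100000):
    pipe.set('name' + str(i), 'helloworld')
    if (i + 1) % 1000 == 0:            # flush in batches to bound memory use
        pipe.execute()
pipe.execute()

Even then, this is a single client on a single connection, which is exactly the limitation the official documentation is pointing at.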

So how do we write efficiently?

In version 2.6, Redis introduced a new feature, pipe mode, which lets you feed a text file written in the Redis protocol straight to the server through a pipe.

That is a mouthful, so here are the concrete steps:

1. Create a text file containing the Redis commands

SET Key0 Value0
SET Key1 Value1
...
SET KeyN ValueN

If you already have the raw data, building this file is actually not hard; shell or Python, for example, will both do.
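
For the IP-log question that started this thread, the generator might look like this (a sketch; the input file ips.txt and the set name ipset are placeholders of my own, not from the original question):

#!/usr/bin/python
# Sketch: turn a log of IP addresses, one per line, into Redis commands.
# The file name 'ips.txt' and set name 'ipset' are assumed for illustration.
with open('ips.txt') as f:
    for line in f:
        ip = line.strip()
        if ip:
            print 'SADD ipset ' + ip

Collecting the IPs into a set also deduplicates them for free.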

2. Convert these commands into the Redis protocol.

This is because pipe mode speaks the raw Redis protocol, not plain Redis commands.

For how to convert, see the script later in this article.
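
For reference, the official documentation sketches this conversion in Ruby; a Python rendering of the same idea (my own, not the script used in the test below) would be:

# Sketch: encode one Redis command, given as a list of arguments,
# into the raw Redis protocol that pipe mode consumes.
def gen_redis_proto(*args):
    proto = '*' + str(len(args)) + '\r\n'
    for arg in args:
        proto += '$' + str(len(arg)) + '\r\n' + arg + '\r\n'
    return proto

# gen_redis_proto('SET', 'name0', 'helloworld') returns:
# '*3\r\n$3\r\nSET\r\n$5\r\nname0\r\n$10\r\nhelloworld\r\n'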

3. Insert through the pipe

cat data.txt | redis-cli --pipe
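
When it finishes, redis-cli --pipe prints a short report; the example in the official documentation looks like this (the reply count will match however many commands you piped in):

All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000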

Shell VS Redis pipe

The following test compares the efficiency of shell-based bulk insertion against Redis pipe.

Test plan: insert the same 100,000 records into the database through a shell script and through Redis pipe, and record the time each takes.

Shell

The script is as follows:

#!/bin/bash
for ((i=0;i<100000;i++))
do
echo -en "helloworld" | redis-cli -x set name$i >> redis.log
done

Every write stores the same value, helloworld, but under a different key: name0, name1, ..., name99999.

Redis pipe

Redis pipe takes a bit more work.

1> First, build the text file of Redis commands

Here I went with Python:

#!/usr/bin/python
for i in range(100000):
    print 'set name'+str(i),'helloworld'

# python 1.py > redis_commands.txt

# head -2 redis_commands.txt 

set name0 helloworld
set name1 helloworld

2> Convert these commands into the Redis protocol

Here I used a shell script found on GitHub:

#!/bin/bash
while read CMD; do
  # each command begins with *{number arguments in command}\r\n
  XS=($CMD); printf "*${#XS[@]}\r\n"
  # for each argument, we append ${length}\r\n{argument}\r\n
  for X in $CMD; do printf "\$${#X}\r\n$X\r\n"; done
done < redis_commands.txt

# sh 20.sh > redis_data.txt

# head -7 redis_data.txt 

*3
$3
set
$5
name0
$10
helloworld

At this point, the data is ready.

Test results

(The original post presented the timings as screenshots.) The time consumed by the two approaches is not even in the same order of magnitude.

Finally, let's look at how pipe mode works under the hood:

  • redis-cli --pipe tries to send data as fast as possible to the server.
  • At the same time it reads data when available, trying to parse it.
  • Once there is no more data to read from stdin, it sends a special ECHO command with a random 20 bytes string: we are sure this is the latest command sent, and we are sure we can match the reply checking if we receive the same 20 bytes as a bulk reply.
  • Once this special final command is sent, the code receiving replies starts to match replies with this 20 bytes. When the matching reply is reached it can exit with success.

In other words, redis-cli --pipe pushes the data to the Redis server as fast as it can, while reading and parsing replies as they become available. As soon as it has read all of the input from stdin, it sends a special ECHO command carrying a random 20-byte string; when the server echoes those same 20 bytes back, redis-cli knows that every command before it has been processed and can exit.
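
A bare-bones illustration of that handshake (a sketch only: it buffers the whole file and reads replies just once at the end, whereas the real redis-cli reads and writes concurrently; the file name redis_data.txt matches the test above):

# Sketch: replay raw protocol from redis_data.txt, then send an ECHO
# with a random 20-byte marker and wait until the marker comes back.
import os, socket, binascii

marker = binascii.hexlify(os.urandom(10))        # 20 ASCII bytes
s = socket.create_connection(('localhost', 6379))
s.sendall(open('redis_data.txt', 'rb').read())   # all queued commands
s.sendall('*2\r\n$4\r\nECHO\r\n$20\r\n' + marker + '\r\n')

buf = ''
while marker not in buf:   # replies to every earlier command arrive first
    buf += s.recv(4096)
s.close()
print 'last reply received; all commands processed'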

To wrap up:

Some readers later asked how long it takes to generate the Redis commands and to convert them into the protocol, so here are those timings as well:

[root@mysql-server1 ~]# time python 1.py > redis_commands.txt

real    0m0.110s
user    0m0.070s
sys     0m0.040s

[root@mysql-server1 ~]# time sh 20.sh > redis_data.txt

real    0m7.112s
user    0m5.861s
sys     0m1.255s

