On the first day of the New Year's holiday, with nothing else to do at home, I implemented a single-process version of MapReduce in Go (code on GitHub). It finds the 10 most frequent words in a large file; since the functionality is simple, the design is not decoupled.
This post first introduces the general concepts of MapReduce and then walks through the code. If I have time over the next few days, I will implement a distributed, highly available version.
1. MapReduce Architecture Overview
The figure above, from the paper, shows the overall architecture of MapReduce. In short, the idea behind MapReduce is divide and conquer: partition the data and process each partition with a mapper, which emits intermediate files in key-value form; then merge the mappers' intermediate files with reducers, grouping identical keys together and writing result files; if needed, a combiner performs a final merge.
It boils down to five parts: the user program, the Master, the Mapper, the Reducer, and the Combiner (not shown in the figure).
- User program: splits the input data and supplies the Mapper, Reducer, and Combiner code.
- Master: the control center. It decides how many Mappers and Reducers to launch, e.g. m processes for Mappers and n for Reducers. To the Master, Mappers and Reducers are all just workers running different code: Mappers run the user's map code, Reducers run the user's reduce code. The Master also acts as a conduit for intermediate paths: it passes the intermediate files produced by Mappers on to Reducers, and returns the result files produced by Reducers, or hands them to the Combiner if one is needed. Since a single Master is both a single point of failure and a performance bottleneck, it can be clustered in primary-standby or distributed mode, using ZooKeeper for leader election and message middleware for data synchronization. The Master can also apply scheduling policies: if a worker runs for an unusually long time, it has probably stalled, so the data assigned to it can be handed to another worker, with duplicate results deduplicated afterwards.
- Mapper: cuts the input data into key-value format. When a Mapper finishes, it reports the path of its intermediate file to the Master, which passes it on to a Reducer for further processing. If a Mapper has not finished, or has finished but a Reducer has not yet read all of its intermediate output, its input split is re-executed by another Mapper.
- Reducer: receives messages from the Master about Mapper output files, reads the files via RPC, processes them, and writes result files. n Reducers produce n output files.
- Combiner: performs the final merge; often unnecessary.
Overall, the architecture is not complicated. Any mechanism works for inter-component communication: RPC, HTTP, a custom protocol, and so on.
2. Implementation Walkthrough
This version is a single-machine, single-process implementation: the Mapper, Reducer, and Combiner run as goroutines and communicate over channels. The code is written rather casually and is not decoupled.
- Function: count the 10 most frequent words in a given file
- Input: a large file
- Output: the 10 most frequent words
- Implementation: 5 Mapper goroutines, 2 Reducers, 1 Combiner
For convenience, the Combiner heap-sorts to extract the 10 most frequent words; strictly speaking, that step belongs in the user program.
The directory layout is shown below. big_input_file.txt under bin is the input file, which can be generated by the main file under generate; the caller file is the entry-point user program, and the master directory holds the master, mapper, reducer, and combiner code:
```
.
├── README.md
├── bin
│   └── file-store
│       └── big_input_file.txt
└── src
    ├── caller
    │   └── main.go
    ├── generate
    │   └── main.go
    └── master
        ├── combiner.go
        ├── mapper.go
        ├── master.go
        └── reducer.go

6 directories, 8 files
```
2.1 caller
The user program: reads the input file, splits it into pieces of a fixed number of lines, and then calls master.Handle to process them.
```go
package main

import (
    "bufio"
    "os"
    "path"
    "path/filepath"
    "strconv"

    "master"

    "github.com/vinllen/go-logger/logger"
)

const (
    LIMIT int = 10000 // the limit line of every file
)

func main() {
    curDir, err := filepath.Abs(filepath.Dir(os.Args[0]))
    if err != nil {
        logger.Error("Read path error: ", err.Error())
        return
    }

    fileDir := path.Join(curDir, "file-store")
    _ = os.Mkdir(fileDir, os.ModePerm)

    // 1. read file
    filename := "big_input_file.txt"
    inputFile, err := os.Open(path.Join(fileDir, filename))
    if err != nil {
        logger.Error("Read inputFile error: ", err.Error())
        return
    }
    defer inputFile.Close()

    // 2. split inputFile into several pieces, each holding at most LIMIT lines
    filePieceArr := []string{}
    scanner := bufio.NewScanner(inputFile)
    piece := 1

Outter:
    for {
        outputFilename := "input_piece_" + strconv.Itoa(piece)
        outputFilePos := path.Join(fileDir, outputFilename)
        filePieceArr = append(filePieceArr, outputFilePos)

        outputFile, err := os.Create(outputFilePos)
        if err != nil {
            logger.Error("Split inputFile error: ", err.Error())
            continue
        }
        defer outputFile.Close()

        for cnt := 0; cnt < LIMIT; cnt++ {
            if !scanner.Scan() {
                break Outter
            }

            _, err := outputFile.WriteString(scanner.Text() + "\n")
            if err != nil {
                logger.Error("Split inputFile writing error: ", err.Error())
                return
            }
        }
        piece++
    }

    // 3. pass to master
    res := master.Handle(filePieceArr, fileDir)
    logger.Warn(res)
}
```
2.2 master
The Master: spawns the Combiner, Reducers, and Mappers in turn, relays messages between them, and returns the final result.
```go
package master

import (
    "github.com/vinllen/go-logger/logger"
)

var (
    MapChanIn      chan MapInput // channel produced by master while consumed by mapper
    MapChanOut     chan string   // channel produced by mapper while consumed by master
    ReduceChanIn   chan string   // channel produced by master while consumed by reducer
    ReduceChanOut  chan string   // channel produced by reducer while consumed by master
    CombineChanIn  chan string   // channel produced by master while consumed by combiner
    CombineChanOut chan []Item   // channel produced by combiner while consumed by master
)

func Handle(inputArr []string, fileDir string) []Item {
    logger.Info("handle called")

    const (
        mapperNumber  int = 5
        reducerNumber int = 2
    )

    MapChanIn = make(chan MapInput)
    MapChanOut = make(chan string)
    ReduceChanIn = make(chan string)
    ReduceChanOut = make(chan string)
    CombineChanIn = make(chan string)
    CombineChanOut = make(chan []Item)

    reduceJobNum := len(inputArr)
    combineJobNum := reducerNumber

    // start combiner
    go combiner()

    // start reducer
    for i := 1; i <= reducerNumber; i++ {
        go reducer(i, fileDir)
    }

    // start mapper
    for i := 1; i <= mapperNumber; i++ {
        go mapper(i, fileDir)
    }

    go func() {
        for i, v := range inputArr {
            MapChanIn <- MapInput{
                Filename: v,
                Nr:       i + 1,
            } // pass job to mapper
        }
        close(MapChanIn) // close map input channel when no more job
    }()

    var res []Item

outter:
    for {
        select {
        case v := <-MapChanOut:
            go func() {
                ReduceChanIn <- v
                reduceJobNum--
                if reduceJobNum <= 0 {
                    close(ReduceChanIn)
                }
            }()
        case v := <-ReduceChanOut:
            go func() {
                CombineChanIn <- v
                combineJobNum--
                if combineJobNum <= 0 {
                    close(CombineChanIn)
                }
            }()
        case v := <-CombineChanOut:
            res = v
            break outter
        }
    }

    close(MapChanOut)
    close(ReduceChanOut)
    close(CombineChanOut)
    return res
}
```
2.3 mapper
The Mapper: reads its input split, produces an intermediate file in key-value format, and notifies the Master.
```go
package master

import (
    "bufio"
    "fmt"
    "os"
    "path"
    "strconv"

    "github.com/vinllen/go-logger/logger"
)

type MapInput struct {
    Filename string
    Nr       int
}

func mapper(nr int, fileDir string) {
    for {
        val, ok := <-MapChanIn // val: filename
        if !ok { // channel closed
            break
        }

        inputFilename := val.Filename
        nr := val.Nr
        file, err := os.Open(inputFilename)
        if err != nil {
            errMsg := fmt.Sprintf("Read file(%s) error in mapper(%d)", inputFilename, nr)
            logger.Error(errMsg)
            MapChanOut <- ""
            continue
        }

        mp := make(map[string]int)
        scanner := bufio.NewScanner(file)
        scanner.Split(bufio.ScanWords)
        for scanner.Scan() {
            str := scanner.Text()
            mp[str]++
        }
        file.Close()

        outputFilename := path.Join(fileDir, "mapper-output-"+strconv.Itoa(nr))
        outputFileHandler, err := os.Create(outputFilename)
        if err != nil {
            errMsg := fmt.Sprintf("Write file(%s) error in mapper(%d)", outputFilename, nr)
            logger.Error(errMsg)
        } else {
            for k, v := range mp {
                str := fmt.Sprintf("%s %d\n", k, v)
                outputFileHandler.WriteString(str)
            }
            outputFileHandler.Close()
        }

        MapChanOut <- outputFilename
    }
}
```
2.4 reducer
The Reducer: reads the intermediate files handed over by the Master and merges them.
```go
package master

import (
    "bufio"
    "fmt"
    "os"
    "path"
    "strconv"
    "strings"

    "github.com/vinllen/go-logger/logger"
)

func reducer(nr int, fileDir string) {
    mp := make(map[string]int) // store the frequency of words

    // read file and do reduce
    for {
        val, ok := <-ReduceChanIn
        if !ok {
            break
        }
        logger.Debug("reducer called: ", nr)

        file, err := os.Open(val)
        if err != nil {
            errMsg := fmt.Sprintf("Read file(%s) error in reducer", val)
            logger.Error(errMsg)
            continue
        }

        scanner := bufio.NewScanner(file)
        for scanner.Scan() {
            str := scanner.Text()
            arr := strings.Split(str, " ")
            if len(arr) != 2 {
                errMsg := fmt.Sprintf("Read file(%s) error that len of line(%s) != 2(%d) in reducer", val, str, len(arr))
                logger.Warn(errMsg)
                continue
            }

            v, err := strconv.Atoi(arr[1])
            if err != nil {
                errMsg := fmt.Sprintf("Read file(%s) error that line(%s) parse error in reducer", val, str)
                logger.Warn(errMsg)
                continue
            }
            mp[arr[0]] += v
        }

        if err := scanner.Err(); err != nil {
            logger.Error("reducer: reading standard input:", err)
        }

        file.Close()
    }

    outputFilename := path.Join(fileDir, "reduce-output-"+strconv.Itoa(nr))
    outputFileHandler, err := os.Create(outputFilename)
    if err != nil {
        errMsg := fmt.Sprintf("Write file(%s) error in reducer(%d)", outputFilename, nr)
        logger.Error(errMsg)
    } else {
        for k, v := range mp {
            str := fmt.Sprintf("%s %d\n", k, v)
            outputFileHandler.WriteString(str)
        }
        outputFileHandler.Close()
    }

    ReduceChanOut <- outputFilename
}
```
2.5 combiner
The Combiner: reads the Reducer result files handed over by the Master, merges them into one, and uses heap sort to output the 10 most frequent words.
```go
package master

import (
    "bufio"
    "container/heap"
    "fmt"
    "os"
    "strconv"
    "strings"

    "github.com/vinllen/go-logger/logger"
)

type Item struct {
    key string
    val int
}

type PriorityQueue []*Item

func (pq PriorityQueue) Len() int {
    return len(pq)
}

func (pq PriorityQueue) Less(i, j int) bool {
    return pq[i].val > pq[j].val
}

func (pq PriorityQueue) Swap(i, j int) {
    pq[i], pq[j] = pq[j], pq[i]
}

func (pq *PriorityQueue) Push(x interface{}) {
    item := x.(*Item)
    *pq = append(*pq, item)
}

func (pq *PriorityQueue) Pop() interface{} {
    old := *pq
    n := len(old)
    item := old[n-1]
    *pq = old[0 : n-1]
    return item
}

func combiner() {
    mp := make(map[string]int) // store the frequency of words

    // read file and do combine
    for {
        val, ok := <-CombineChanIn
        if !ok {
            break
        }
        logger.Debug("combiner called")

        file, err := os.Open(val)
        if err != nil {
            errMsg := fmt.Sprintf("Read file(%s) error in combiner", val)
            logger.Error(errMsg)
            continue
        }

        scanner := bufio.NewScanner(file)
        for scanner.Scan() {
            str := scanner.Text()
            arr := strings.Split(str, " ")
            if len(arr) != 2 {
                errMsg := fmt.Sprintf("Read file(%s) error that len of line != 2(%s) in combiner", val, str)
                logger.Warn(errMsg)
                continue
            }

            v, err := strconv.Atoi(arr[1])
            if err != nil {
                errMsg := fmt.Sprintf("Read file(%s) error that line(%s) parse error in combiner", val, str)
                logger.Warn(errMsg)
                continue
            }
            mp[arr[0]] += v
        }
        file.Close()
    }

    // heap sort: push every word, then pop the 10 most frequent
    pq := make(PriorityQueue, 0)
    heap.Init(&pq)
    for k, v := range mp {
        node := &Item{
            key: k,
            val: v,
        }
        heap.Push(&pq, node)
    }

    res := []Item{}
    for i := 0; i < 10 && pq.Len() > 0; i++ {
        node := heap.Pop(&pq).(*Item)
        res = append(res, *node)
    }

    CombineChanOut <- res
}
```
3. Summary
Shortcomings and unimplemented parts:
- High coupling between modules
- No mitigation for the Master single point of failure
- Not implemented as multiple processes communicating over RPC
- No reassignment of tasks to a new worker when a single worker runs too long
If I have time, I will implement a distributed, highly available version next, with modules communicating over RPC.
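As a sketch of what the multi-process direction could look like, here is a minimal master built on Go's standard net/rpc package that hands out input splits to worker processes over TCP. All names here (`TaskArgs`, `FetchTask`, `startMaster`, `fetchOne`) are hypothetical illustrations, not code from the repository.

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// TaskArgs and TaskReply are hypothetical RPC message types:
// a worker asks the master for its next input split.
type TaskArgs struct {
	WorkerID int
}

type TaskReply struct {
	Filename string // input split assigned to this worker; "" when none left
}

// Master hands out input splits to workers over RPC.
type Master struct {
	splits chan string
}

// FetchTask is the RPC method a worker calls to obtain a split.
func (m *Master) FetchTask(args *TaskArgs, reply *TaskReply) error {
	select {
	case f := <-m.splits:
		reply.Filename = f
	default:
		reply.Filename = "" // no work left
	}
	return nil
}

// startMaster serves the Master on a random local port and returns
// the address workers should dial.
func startMaster(splits []string) string {
	m := &Master{splits: make(chan string, len(splits))}
	for _, s := range splits {
		m.splits <- s
	}

	srv := rpc.NewServer() // dedicated server, so registration never collides
	if err := srv.Register(m); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go srv.Accept(ln)
	return ln.Addr().String()
}

// fetchOne plays the worker side: dial the master and ask for a task.
func fetchOne(addr string, workerID int) string {
	client, err := rpc.Dial("tcp", addr)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	var reply TaskReply
	if err := client.Call("Master.FetchTask", &TaskArgs{WorkerID: workerID}, &reply); err != nil {
		panic(err)
	}
	return reply.Filename
}

func main() {
	addr := startMaster([]string{"input_piece_1", "input_piece_2"})
	fmt.Println(fetchOne(addr, 1)) // prints "input_piece_1"
}
```

A real version would add completion reports, Mapper-to-Reducer file handoff, and the timeout-based reassignment listed above; this only shows the task-assignment RPC shape.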
Note
Please credit the source when reposting: http://vinllen.com/golangshi-xian-mapreducedan-jin-cheng-ban-ben/
References
https://research.google.com/archive/mapreduce.html