As a Go layman, I recently used Go to implement a simulation experiment for a distributed indexing system; this article summarizes the implementation process and my experience.
Index technology is a technical focus of distributed storage. To validate the design of an index, one needs to design a simulation test to verify whether its performance metrics are satisfactory.
Before implementing the system, my knowledge of Go was very elementary. I chose this unfamiliar language mainly for the following reasons:
- Go has a convenient package-management scheme: dependency management with the `go get` command and third-party tools such as godep is very easy. Since the experiment relies on a third-party B+Tree implementation, Go became a good choice.
- Go compiles and runs fast. Although it is a compiled language, it builds almost as quickly as a scripting language starts up, while still offering good execution efficiency, which is quite attractive for a simulation experiment that must ingest a large amount of data.
- Go's syntax is concise without losing power. On first contact some of it felt odd, but it is much more concise than C++ and richer in features than C, and garbage collection seals the deal.
The design of the system is described in detail below.
Problem Description
Before introducing the system I designed, let me introduce the questions and the corresponding requirements. At the same time, this step will simplify the problem as much as possible.
Suppose we have a distributed storage system consisting of $n$ storage nodes, each of which stores part of the overall data. A continuous stream of query requests may randomly arrive at any node in the system. If the requested node does not hold the data, it is responsible for locating the node that does and returning the data to the client.
A simple two-tier index design gives each node a Local Index and a Global Index. When a query arrives, the node first looks in its Local Index; if that fails, it looks in the Global Index for the nodes that might contain the target data. In addition, if the target data of a query does not exist anywhere in the system, we want to discover that as early as possible, to avoid the more expensive operation of forwarding the query. So in a two-tier index, we want the Global Index lookup to determine as reliably as possible whether the target data exists.
The above requirement is generally measured with the False Positive metric. A false positive, in a nutshell, is a piece of data that does not exist anywhere in the system but is reported as existing by a Global Index lookup. The most straightforward design for the Global Index is to replicate each node's Local Index verbatim. That method guarantees zero false positives but takes up a large amount of space, so a trade-off must be made: achieve the lowest possible false-positive rate within an acceptable space budget.
In addition, the cost of querying the Global Index is much lower than that of querying the Local Index, so we can change the two-tier model to always look up the Global Index first, and only query the Local Index after the target node has been determined.
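As a rough sketch of this global-first query flow (all names here, such as `globalMightContain` and `query`, are hypothetical placeholders rather than the experiment's actual code):

```go
package main

import "fmt"

// node is a hypothetical stand-in for a storage node: the map plays
// the role of its Local Index.
type node struct {
	local map[uint64]bool
}

// globalMightContain plays the role of the Global Index: it may return
// true for keys the node does not hold (a false positive), but never
// returns false for a key the node does hold.
func globalMightContain(n *node, key uint64) bool {
	return n.local[key] || key%7 == 0 // key%7 == 0 simulates false positives
}

// query consults the Global Index first and pays the forwarding cost
// only for candidate nodes that might contain the key.
func query(nodes []*node, key uint64) (found bool, forwards int) {
	for _, n := range nodes {
		if globalMightContain(n, key) {
			forwards++ // forward the query to this candidate node
			if n.local[key] {
				return true, forwards
			}
		}
	}
	return false, forwards
}

func main() {
	a := &node{local: map[uint64]bool{1: true, 2: true}}
	b := &node{local: map[uint64]bool{3: true, 4: true}}
	found, fw := query([]*node{a, b}, 3)
	fmt.Println(found, fw) // prints "true 1"
}
```

The better the Global Index rejects non-existent keys, the fewer wasted forwards a query incurs.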
To simplify the problem, the data stored in the entire test system is considered static. In other words, the experimental steps are to insert all the test data into the system before executing the test query. During the testing process, it is also not considered to establish redundant backups for the data.
Model
The three important models and data structures used in the system I designed were:
- Fat Tree
- B+Tree
- Bloom Filter
The following sections describe these models and their effects respectively.
Fat Tree
Fat Tree is not a data structure for storing data, but a common network topology model. To calculate the time cost of forwarding a lookup request from one server to another, we can use the structure of a Fat Tree.
Calling a Fat Tree a tree is a bit inaccurate; it is more like a star-shaped network. A three-layer Fat Tree consists of a core layer, an aggregation layer, and an edge layer, all made up of routers. Every router in a Fat Tree has $k$ ports; each edge-layer router uses half of its ports to connect hosts and the other half to link to the aggregation layer. Meanwhile, $\frac{k}{2}$ edge-layer routers and $\frac{k}{2}$ aggregation-layer routers are grouped into a complete bipartite graph called a Pod. Each aggregation-layer router also links to $\frac{k}{2}$ core-layer routers, and different aggregation-layer routers in the same Pod connect to disjoint sets of core routers. Clearly we need $\frac{k^2}{4}$ core-layer routers, and the total number of hosts that can be connected is $\frac{k^3}{4}$.
A network constructed this way has the property that all $k$ ports of every router are used. Across the whole network, communication between any two distinct hosts takes one of only three forms:
- 2 edges are required between hosts connected to the same edge-layer router
- 4 edges are required between hosts connected to different edge-layer routers of the same Pod
- 6 edges are required between hosts in different Pods
For example, a Fat Tree with $k = 4$ has 4 core routers, 4 Pods, and 16 hosts in total.
Number the hosts from left to right starting at 0. Given $k$, a communication source host A, and a target host B, the hop-count function written in Go is as follows. Because in our system B has to return the lookup result to A, all route lengths are doubled.

```go
// hops returns the number of edges traversed by a query from host a to
// host b plus the reply from b back to a, in a k-port Fat Tree.
func hops(a, b, k int) int {
	hk := k / 2    // hosts per edge-layer router
	pod := hk * hk // hosts per Pod: (k/2)^2
	switch {
	case a/hk == b/hk: // same edge router: 2 edges each way
		return 4
	case a/pod == b/pod: // same Pod: 4 edges each way
		return 8
	default: // different Pods: 6 edges each way
		return 12
	}
}
```
B+Tree
The content of B+Tree needs little discussion; it is a common index structure. Storing $n$ elements takes $O(n)$ space, and insertion, lookup, and deletion all take $O(\log_b n)$ time, making it a very efficient index.
As I said earlier, one important reason for choosing Go for this experiment is its easy-to-use dependency management, which lets you work directly with code hosted on GitHub, fetched with a single `go get` command. Here I use the third-party library cznic/b:

```
go get github.com/cznic/b
```
Only one dependency is used in this article, but projects with many dependencies may need a tool like Python's pip or RubyGems. One step further, a Bundler-like tool that isolates the environments of different projects would greatly improve productivity; using GPM and GVP together is a good solution.
I use a B+Tree as each node's Local Index in the experiment. To measure the computational cost of querying the B+Tree, we can take advantage of Go's support for functional programming and use a closure that captures the node's context to count key comparisons. The cznic/b implementation of B+Tree lets you pass in a function that compares key sizes. I used the following struct to define a Node:

```go
type Node struct {
	ID          int
	BloomSize   int
	HashCount   int
	CmpCount    int // counts key comparisons performed on this node
	BPlusTree   *b.Tree
	BloomFilter []uint64
	ItemCount   int
}
```
Use the following method to initialize a Node and its B+Tree.
```go
n := new(Node)
n.BPlusTree = b.TreeNew(func(a, b interface{}) int {
	na := a.(uint64)
	nb := b.(uint64)
	n.CmpCount++ // count comparisons through the closure
	switch {
	case na > nb:
		return 1
	case na == nb:
		return 0
	default:
		return -1
	}
})
```
Thanks to Go's support for anonymous functions and closures, we can implement this feature quite gracefully.
Bloom Filter
For the Global Index I chose a very simple solution: a Bloom Filter. Simply put, when inserting data, a Bloom Filter uses $k$ different hash functions to map a key to $k$ positions in an integer array and marks those positions as 1. When querying, it hashes the requested key with the same hash functions and checks whether all $k$ corresponding positions are 1. If they are, the key is likely to exist; otherwise it definitely does not.
To save space, each position is marked with a single bit packed into an integer array. If the array holds $m$ bits in total and stores $n$ keys, the theoretical estimate of a query's false-positive probability is:
$$\left (1-e^{-kn/m} \right) ^k$$
As the formula shows, a Bloom filter is simple to implement and has a small footprint, but the more data it stores, the higher the false-positive probability and the worse the filtering effect. A Bloom filter also has no convenient way to delete elements, and maintaining one under deletions is relatively complex.
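As a quick sanity check of the formula (the parameter values here are chosen purely for illustration): with $k = 4$ hash functions and 10 bits of space per stored key, i.e. $m = 10n$, the estimated false-positive probability is

$$\left(1 - e^{-4n/10n}\right)^4 = \left(1 - e^{-0.4}\right)^4 \approx 0.33^4 \approx 1.2\%$$

so a Bloom filter of only a few bits per key already filters out almost all non-existent keys.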
Without the need to delete elements, a Bloom Filter is a good choice. The Go standard library already provides implementations of hash algorithms such as MD5, SHA1, Adler-32, and CRC-64; just import them and they are ready to use, which is very convenient:

```go
import (
	"crypto/md5"
	"crypto/sha1"
	"hash/adler32"
	"hash/crc64"
)
```
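To make the mechanics concrete, here is a minimal Bloom Filter sketch over a `[]uint64` bit array. It derives the $k$ positions from two standard-library hashes (CRC-64 and Adler-32) via double hashing; the type and function names are my own, and the real experiment may combine the hashes differently.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/adler32"
	"hash/crc64"
)

var crcTable = crc64.MakeTable(crc64.ISO)

// bloom is a minimal Bloom Filter: m bits packed into a []uint64,
// with k bit positions derived per key.
type bloom struct {
	bits []uint64
	m    uint64 // total number of bits
	k    int    // number of hash functions
}

func newBloom(m uint64, k int) *bloom {
	return &bloom{bits: make([]uint64, (m+63)/64), m: m, k: k}
}

// positions derives k bit positions by double hashing:
// h_i(key) = h1 + i*h2 (mod m).
func (b *bloom) positions(key uint64) []uint64 {
	var buf [8]byte
	binary.LittleEndian.PutUint64(buf[:], key)
	h1 := crc64.Checksum(buf[:], crcTable)
	h2 := uint64(adler32.Checksum(buf[:])) | 1 // force an odd step
	pos := make([]uint64, b.k)
	for i := 0; i < b.k; i++ {
		pos[i] = (h1 + uint64(i)*h2) % b.m
	}
	return pos
}

// Insert marks the key's k positions as 1.
func (b *bloom) Insert(key uint64) {
	for _, p := range b.positions(key) {
		b.bits[p/64] |= 1 << (p % 64)
	}
}

// MightContain reports false only when the key was definitely never
// inserted; a true result may be a false positive.
func (b *bloom) MightContain(key uint64) bool {
	for _, p := range b.positions(key) {
		if b.bits[p/64]&(1<<(p%64)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	f := newBloom(1024, 4)
	f.Insert(42)
	fmt.Println(f.MightContain(42)) // prints "true": no false negatives
}
```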
Test data generation and file reading
To test the performance of the system we are designing, we need to generate test data with specific distributions and the corresponding query data. The two common distributions are the uniform distribution and the Zipf distribution. The Zipf distribution in particular governs many important kinds of data, including the frequency of words in English, so keyword-index systems such as search engines should pay special attention to it.
To simplify the problem, I generate the test data for both distributions in advance, then read it back and insert it into the index at the start of each test. The test data are integers and are inserted as keys into the B+Tree. Generating random data with a particular distribution using Python's NumPy library looks like this:

```python
import numpy as np

# 100,000 uniformly distributed integers in 0~1280000
np.random.uniform(0, 1280000, 100000).astype(int)

# 100,000 Zipf-distributed random numbers with parameter a=2
np.random.zipf(2, 100000)
```
Save the generated data as text files, and then simply read them in the Go program. As a heavy Python user, I wanted generator-like syntax here, letting a function yield one number from a file at a time. Go has no `yield` syntax, but a similar effect can be achieved with a channel and a goroutine. It can be written like this:
```go
func iterFile(filepath string) chan uint64 {
	ch := make(chan uint64)
	go func() {
		fi, err := os.Open(filepath)
		if err != nil {
			panic(err)
		}
		defer fi.Close()
		var i uint64
		for {
			// Fscan treats newlines as spaces, so each call reads
			// one whitespace-separated integer.
			_, err := fmt.Fscan(fi, &i)
			if err != nil {
				if err == io.EOF {
					break
				}
				panic(err)
			}
			ch <- i
		}
		close(ch)
	}()
	return ch
}
```
The line `defer fi.Close()` is worth noting: the instruction generated by the `defer` keyword executes when the surrounding function returns, which avoids the problem of forgetting to release the file and is a very elegant piece of syntax. Even more conveniently, we can use a `for` loop to keep drawing values from the channel.
```go
for i := range iterFile("somefile.txt") {
	// do something with i ...
}
```
While studying channels, I found that although a function can return multiple values, Go has no tuple type, so it is impossible to create a channel that transmits multiple values at once (unless you use `interface{}`), a small wrinkle of the language.
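The usual workaround is to declare the channel over a small struct; the `kv` type and `pairs` function below are hypothetical examples, not part of the experiment:

```go
package main

import "fmt"

// kv bundles the two values we want to send through one channel,
// since Go has no tuple type.
type kv struct {
	Key   uint64
	Value string
}

// pairs streams (key, value) pairs through a channel, in the same
// goroutine-plus-channel style as the file iterator above.
func pairs() chan kv {
	ch := make(chan kv)
	go func() {
		for i, s := range []string{"a", "b", "c"} {
			ch <- kv{Key: uint64(i), Value: s}
		}
		close(ch)
	}()
	return ch
}

func main() {
	for p := range pairs() {
		fmt.Println(p.Key, p.Value) // prints "0 a", "1 b", "2 c"
	}
}
```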
Perform simulation experiments
To balance the amount of data stored on each server, we can hash each key to be inserted and decide which node stores it based on the hashed value. This nicely spreads out data with an unbalanced density distribution, such as Zipf. The next step is to run the simulation.
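A minimal sketch of this placement step (the function name `nodeFor` is my own, and CRC-64 is just one reasonable choice of hash):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/crc64"
)

var placeTable = crc64.MakeTable(crc64.ISO)

// nodeFor hashes the key and takes the result modulo the number of
// nodes, so that even a skewed (e.g. Zipf) key distribution is spread
// roughly evenly across servers.
func nodeFor(key uint64, nodes int) int {
	var buf [8]byte
	binary.LittleEndian.PutUint64(buf[:], key)
	return int(crc64.Checksum(buf[:], placeTable) % uint64(nodes))
}

func main() {
	// Count how many of 10,000 consecutive keys land on each of 16 nodes.
	counts := make([]int, 16)
	for k := uint64(0); k < 10000; k++ {
		counts[nodeFor(k, 16)]++
	}
	fmt.Println(counts)
}
```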
The simulation results for 100,000 queries over uniformly distributed and Zipf-distributed data are as follows:

```
* Testing uniform distribution sparse set
Inserting keys ... keys inserted: 235195
Testing point search ...
Average comparing: 2.18
Average OK comparing: 10.02
Average fail comparing: 10.90
Average transfer cost: 2.50
False positive proportion: 3.92%

* Testing Zipf distribution sparse set
Inserting keys ... keys inserted: 230581
Testing point search ...
Average comparing: 8.06
Average OK comparing: 9.58
Average fail comparing: 10.92
Average transfer cost: 9.78
False positive proportion: 3.42%
```
Summary
This article summarizes my recent implementation of a simple distributed-index simulation test program. The current design is admittedly simplistic; for example, it does not take redundant backups of the data into account. But overall, the system's performance is satisfactory for both distributions.
In the process of implementing the program, I got to know several aspects of Go better. In my opinion, Go is a promising language: it may still be unable to replace C in some situations, but C++ no longer seems competitive against it. Of course, Go currently lacks GUI libraries, scientific-computing libraries, and so on, but I believe it will show more and more vitality as time goes on.