/* Copyright notice: This article may be reproduced freely; please be sure to indicate the original source and the author. */

Author: Zhang Junlin

**Excerpted from Chapter 14 of "Big Data Daily Knowledge: Architecture and Algorithms"; the book's table of contents is available here**

**1. Graph computation using MapReduce**

There has been relatively little work on using the MapReduce framework to process large-scale graph data, mainly for two reasons. On the one hand, compared with other kinds of workloads, traditional graph algorithms do not map intuitively onto a series of MapReduce jobs; on the other hand, from a certain perspective, MapReduce is not the most suitable distributed computing framework for solving graph computation tasks.

Despite these shortcomings, many graph algorithms can be converted into computational tasks under the MapReduce framework. The following uses PageRank as an example of how to perform graph computation under this framework. The principle of the PageRank calculation has been introduced earlier; this section focuses on how to recast the algorithm under the MapReduce framework so that large-scale graphs can be computed in a distributed fashion across many machines.

Input to the MapReduce framework typically takes the form of key-value pairs, where the value may be a simple type, such as a number or a string, or a complex data structure, such as an array or a record. For graph data, an adjacency-list representation is a natural fit for the internal format: the key of an input record is a graph node ID, and the corresponding value is a composite record holding that node's adjacency list, its current PageRank value, and so on.
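As a concrete illustration, such a key-value record might be built as follows. This is a minimal sketch; the field names `pagerank` and `adjacency_list` are assumptions of this sketch, not notation from the book.

```python
# A hypothetical key-value record for one graph node, as a PageRank
# job might consume it: the key is the node ID, and the value bundles
# the node's current PageRank score with its adjacency list.
def make_node_record(node_id, pagerank, adjacency_list):
    """Build one (key, value) input pair for the MapReduce job."""
    return (node_id, {"pagerank": pagerank, "adjacency_list": adjacency_list})

# Node "A" with PageRank 0.25 and out-links to nodes B and C:
record = make_node_record("A", 0.25, ["B", "C"])
```

The value side is deliberately a record rather than a plain number, because each map invocation needs both the score to propagate and the adjacency list along which to propagate it.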

For many graph algorithms, the shuffle and sort operations inside a MapReduce computation play a role similar to propagating messages along the edges of the graph. This technique is visible in the PageRank pseudocode in Figure 14-7.

In the map operation of this example, the key of an input record is a graph node ID, and the value is the node's data structure N, which includes the adjacency list adjacencyList and the node's current PageRank value. Line 3 computes the PageRank score that the current node propagates to each adjacent node, and lines 5 and 6 convert these scores into new key1-value1 pairs, with the adjacent node's ID as the new key1 and the score propagated from the current node to that adjacent node as the new value1. In addition, the current node's own information must be preserved for subsequent iterations, so line 4 re-emits the input record itself unchanged.
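Since the pseudocode of Figure 14-7 is not reproduced here, the following is a minimal Python sketch of what such a map function might look like. The `pagerank`/`adjacency_list` field names and the `("node", ...)`/`("score", ...)` tagging used to tell the passed-through record apart from score messages are assumptions of this sketch.

```python
def pagerank_map(node_id, node):
    """One PageRank map step: emit a share of this node's score to each
    neighbour, and re-emit the node record itself for later iterations.

    `node` is assumed to be a dict with keys "pagerank" and
    "adjacency_list". Emitted values are tagged tuples so the reducer
    can distinguish the node structure from partial scores.
    """
    emitted = []
    # Re-emit the node record unchanged (mirrors line 4 of the pseudocode).
    emitted.append((node_id, ("node", node)))
    neighbours = node["adjacency_list"]
    if neighbours:
        share = node["pagerank"] / len(neighbours)  # score per out-edge
        for m in neighbours:
            emitted.append((m, ("score", share)))   # lines 5-6: new key1-value1
    return emitted
```

For example, a node A with PageRank 0.3 and neighbours B and C would emit a score of 0.15 to each of B and C, plus its own record keyed by A.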

Through MapReduce's internal shuffle and sort operations, all value1 entries sharing the same key1, that is, the partial PageRank scores sent to the node with ID key1 from other nodes, are brought together; this is what resembles message propagation. In the reduce operation of the Figure 14-7 example, the input consists of a graph node ID and the corresponding list of partial PageRank scores. Lines 4 through 8 of the pseudocode accumulate these partial scores into a new PageRank value, while checking whether a given value1 is actually the node's own information record (line 5). Line 9 updates the PageRank value stored inside the node record, and line 10 outputs the updated node record. This completes one round of the PageRank iteration; the output of this reduce phase can serve as input to the map phase of the next iteration. The cycle repeats until the termination condition is met, at which point the final results are output.
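A matching reduce step can be sketched as follows. As before, the tagged-tuple value format and the dict field names are assumptions of this sketch rather than the book's notation, and the update simply sums the partial scores as the text describes (a real PageRank implementation would typically also fold in a damping factor).

```python
def pagerank_reduce(node_id, values):
    """One PageRank reduce step: separate the node record from the
    partial scores, sum the scores, and write the new PageRank back
    into the node record.

    `values` is a list mixing one ("node", record) entry with any
    number of ("score", s) entries for this node_id.
    """
    node = None
    total = 0.0
    for tag, payload in values:
        if tag == "node":       # line 5: this entry is the node structure
            node = payload
        else:                   # lines 4-8: accumulate a partial score
            total += payload
    node["pagerank"] = total    # line 9: update the node's PageRank
    return (node_id, node)      # line 10: output the updated record
```

The output pair has exactly the same shape as the map-phase input, which is what allows the reduce output of one round to feed directly into the map phase of the next round.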

After the map operation, the MapReduce framework ships intermediate result records with the same key to the same machine over the network, to meet the needs of the subsequent reduce phase. This network traffic is generally very large and can seriously hurt computational efficiency; the combine operation was introduced precisely to reduce network transmission and improve efficiency. A combine operation runs on the local machine after the map operation and first aggregates those local map output records that share the same key, so that items that would otherwise be transmitted separately are merged, greatly reducing network traffic and speeding up the computation. Graph computation can use the same optimization. Figure 14-8 shows the corresponding combine operation; its flow resembles that of the reduce operation: lines 4 through 8 accumulate the local value entries that share the same key, and line 9 emits the accumulated result, with the key unchanged and the value replaced by the aggregated sum s, thereby reducing the volume of network traffic.

The above describes how to perform PageRank computation under the MapReduce framework, and many other graph algorithms can be handled with similar ideas: the key point is still to gather each node's incoming edges together through the shuffle and sort operations described above. The reduce operation may be a summation, as in this example, or another kind of aggregation over the incoming edges, such as max or min, depending on the application, but the basic idea does not differ significantly.

**2. Problems with MapReduce for graph computation**

Although MapReduce has become the mainstream distributed computing model, it has its scope of applicability. For many machine learning, data mining, scientific computing, and graph mining algorithms, problems can be solved with the MapReduce model after suitable transformation, but it is often not the best solution for this class of problems. The root cause is that many scientific computations and graph algorithms require multiple rounds of iteration. With the MapReduce model, the intermediate results of each iteration must be written repeatedly to local disk in the map phase and to the GFS/HDFS file system in the reduce phase. Since each iteration generally builds on the results of the previous one, those results must be loaded back into memory, and the newly computed intermediate results are again written to the local file system and to GFS/HDFS. This repeated and unnecessary disk input/output seriously degrades computational efficiency. In addition, task management overhead, such as re-initializing tasks for every iteration, also has a large impact on efficiency.

The following example, computing the single-source shortest path of a graph with the MapReduce model, illustrates how serious this problem is. The "single-source shortest path" problem is defined on a graph structure *G* = <*N*, *E*>, where *N* is the set of graph nodes and *E* is the set of edges; each edge carries a weight representing the distance between the two nodes it connects. Given an initial node *V*, the task is to compute the minimum distance from *V* to every other node in the graph. In this example, Figure 14-9 shows that the graph is represented internally with an adjacency-list scheme. Starting from source node A, we compute the shortest distance from every other node to node A. In the initialization phase, the minimum distance of source node A is set to 0, and the shortest distance of every other node is set to *INF* (a sufficiently large value).

For the MapReduce model, the computation is divided into two phases, the map phase and the reduce phase. For the problem above, the initial input to the map phase is the adjacency list of a slightly reworked graph G: in addition to each node's adjacency-list information, the record must also store the minimum distance value currently known for that node. Expressed as an ordinary key-value pair: key = node ID, value = <the node's current minimum distance *Dist* to source node A, adjacency list>. Taking source node A as an example, its map-phase input is <A, <0, <(B, 10), (D, 5)>>>; the input data for the other nodes takes the same form.

The transformation logic of the map phase for the input data is: for each key node, compute the current shortest distance to source node A for every node in the key node's adjacency list. Each key-value pair is converted into a sequence of key1-value1 pairs, where key1 is the ID of a node in the key node's adjacency list, and value1 is the current shortest distance from that key1 node to source node A. Taking source node A as an example, its input is <A, <0, <(B, 10), (D, 5)>>>; after the map conversion, the outputs <B, 10> and <D, 5> are obtained. <B, 10> means: the current minimum distance from node B to node A is 10 (obtained by adding node A's distance to the source, 0, to the edge distance from A to B, 10); the meaning of <D, 5> is similar. This completes the map-phase computation, and Figure 14-10 shows the key-value pairs produced by converting the original input into the map-phase output.
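The map logic just described can be sketched in Python as follows. The value layout `(current_distance, adjacency_list)` follows the record format in the text; re-emitting the node's own record (so the reducer can rebuild the adjacency list, as Figure 14-11's output requires) is tagged explicitly, which is an assumption of this sketch.

```python
INF = float("inf")  # stands in for the "sufficiently large value" in the text

def sssp_map(node_id, value):
    """One map step of MapReduce single-source shortest path.

    `value` is (current shortest distance to the source, adjacency list),
    where the adjacency list holds (neighbour_id, edge_weight) pairs.
    Emits a candidate distance to every neighbour, plus the node's own
    record so the graph structure survives into the next round.
    """
    dist, adjacency = value
    emitted = [(node_id, ("node", value))]           # preserve structure
    for neighbour, weight in adjacency:
        emitted.append((neighbour, ("dist", dist + weight)))
    return emitted
```

Note that for a node whose current distance is still INF, every emitted candidate is also INF, so unreachable nodes cannot shorten anyone's path, which is exactly the behaviour the initialization relies on.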

After the map phase produces its results, the system writes these temporary results to local disk files as input data for the reduce phase. The logic of the reduce phase is: for each node, select the shortest among the many candidate distances from that node to source node A as its current value. Taking node B as an example, the map-phase output in Figure 14-10 contains two entries for key B, <B, 10> and <B, INF>; taking the minimum yields the new shortest distance 10, so the result <B, <10, <(C, 1), (D, 2)>>> is output. Figure 14-11 shows the output of the reduce phase.
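The reduce logic can be sketched as follows, under the same assumed tagged-tuple format: take the minimum over all candidate distances (including the node's previous distance) and re-attach the adjacency list so the output record has the same shape as the next round's map input.

```python
def sssp_reduce(node_id, values):
    """One reduce step of MapReduce single-source shortest path.

    `values` mixes one ("node", (old_dist, adjacency_list)) entry with
    any number of ("dist", d) candidate distances. Returns the node's
    record with the minimum distance found so far.
    """
    best = float("inf")
    adjacency = []
    for tag, payload in values:
        if tag == "node":
            old_dist, adjacency = payload
            best = min(best, old_dist)   # keep the previous best distance
        else:
            best = min(best, payload)    # consider each new candidate
    return (node_id, (best, adjacency))
```

For node B in the text's example, the inputs are the candidate distance 10 and B's old record with distance INF, and the reducer outputs <B, <10, <(C, 1), (D, 2)>>>, matching Figure 14-11.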

At the end of the reduce phase, the system writes the results to the GFS/HDFS file system, completing one round of the single-source shortest path computation, in which the current shortest paths of graph nodes B and D have been updated. To obtain the final result, this process must be iterated again and again, using the reduce output as the input to the next round's map phase. Thus, before the computation finishes, the intermediate results must be written out to the file system many times, which seriously hurts system efficiency. This is the main reason the MapReduce framework is not well suited to graph applications.
