1 Lambda Architecture Introduction
The Lambda Architecture is divided into three layers: a batch layer, a serving layer, and a speed layer.
Conceptually, the effect of the whole architecture can be summarized by the following expression:
query = function(all data)
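To make this abstraction concrete, here is a minimal Java sketch, assuming a hypothetical stream of pageview events (one URL string per event) and a hypothetical counting query; neither is part of any Lambda framework. A query is simply a pure function recomputed over the entire, immutable dataset:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryAsFunction {

    // "query = function(all data)": the result is recomputed from the complete,
    // immutable set of raw events (here, one URL string per pageview).
    static Map<String, Long> pageViewsPerUrl(List<String> allPageViewEvents) {
        return allPageViewEvents.stream()
                .collect(Collectors.groupingBy(url -> url, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(pageViewsPerUrl(List.of("/home", "/about", "/home")));
        // prints {/about=1, /home=2}
    }
}

The three layers exist to make this simple idea practical at scale: the batch layer recomputes, the serving layer exposes the results, and the speed layer fills the latency gap.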
1.1 Batch layer (Apache Hadoop)
The batch layer is implemented with Hadoop, which is responsible for storing the master dataset and computing arbitrary views over it (the batch views).
Computing the views is an ongoing operation: as new data arrives, it is appended to the master dataset, and MapReduce recomputes the views by iterating over the entire dataset.
Because each view is computed from the full dataset, the views cannot be updated frequently; depending on the size of the dataset and of the cluster, one recomputation pass can take several hours.
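As an illustration, here is a minimal MapReduce sketch of such a recomputation, assuming the master dataset is stored in HDFS as tab-separated lines whose first field is a URL; the paths, the field layout, and the pageview-count view itself are hypothetical examples rather than anything prescribed by the Lambda Architecture.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PageViewBatchView {

    // Emits (url, 1) for every raw event line in the master dataset.
    public static class EventMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        private final Text url = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            url.set(value.toString().split("\t")[0]);
            context.write(url, ONE);
        }
    }

    // Sums the counts per URL; the reducer output files become the batch view.
    public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values) {
                sum += v.get();
            }
            context.write(key, new LongWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "pageview-batch-view");
        job.setJarByClass(PageViewBatchView.class);
        job.setMapperClass(EventMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path("/data/master/pageviews"));   // master dataset
        FileOutputFormat.setOutputPath(job, new Path("/data/views/pageviews"));  // batch view output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Note that the job always reads the whole master dataset and produces the view from scratch; there is no in-place update, which is exactly why the batch view lags behind by hours.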
1.2 Serving layer (Cloudera Impala)
The serving layer is implemented with Cloudera Impala. The output of the batch layer is a set of flat files containing the precomputed batch views, and the serving layer is responsible for indexing and exposing those views so that they can be queried efficiently.
Because the batch views are static, the serving layer only needs to support bulk updates and random reads, and Cloudera Impala fits these requirements well. To expose a view with Impala, all the serving layer has to do is create a table in the Hive metastore that points at the view's files in HDFS; users can then immediately query the view with Impala.
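For example, assuming the batch view produced above was written to /data/views/pageviews, the following sketch registers it in the metastore and queries it over Impala's HiveServer2-compatible JDBC endpoint. The host name, port, table name, and driver choice are assumptions that depend on the cluster and distribution; Cloudera also ships a dedicated Impala JDBC driver.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ServeBatchView {
    public static void main(String[] args) throws Exception {
        // Impala speaks the HiveServer2 protocol, so the Hive JDBC driver can be used here.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://impalad-host:21050/;auth=noSasl");
             Statement stmt = conn.createStatement()) {

            // Register the batch-view files written by the MapReduce job as an external table.
            stmt.execute(
                "CREATE EXTERNAL TABLE IF NOT EXISTS pageview_batch (url STRING, views BIGINT) " +
                "ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t' " +
                "STORED AS TEXTFILE LOCATION '/data/views/pageviews'");

            // Make Impala pick up newly written files, then query the view interactively.
            stmt.execute("REFRESH pageview_batch");
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT url, views FROM pageview_batch ORDER BY views DESC LIMIT 10")) {
                while (rs.next()) {
                    System.out.println(rs.getString("url") + "\t" + rs.getLong("views"));
                }
            }
        }
    }
}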
Hadoop and Impala are excellent tools for the batch and serving layers.
Hadoop can store and process petabytes of data, and Impala can query that data quickly and interactively. However, the batch and serving layers alone cannot meet real-time requirements: MapReduce is high-latency by design, so it can take several hours before new data is reflected in the batch views and propagated to the serving layer.
That is why we need the speed layer.
1.3 Speed layer (Storm, Apache HBase)
In essence, the speed layer does the same thing as the batch layer: it computes views from the data it receives. The speed layer exists to compensate for the high latency of the batch layer, and it does so by computing realtime views with the Storm framework. The realtime views contain only the delta results needed to supplement the batch views. Whereas the batch layer repeatedly recomputes the batch views from scratch, the speed layer uses an incremental model in which the realtime views are updated as each new piece of data arrives. The clever part of the speed layer is that the realtime views are meant to be transient: as soon as the data has propagated through the batch and serving layers, the corresponding results in the realtime views can be discarded. This is called "complexity isolation", meaning that the most complex part of the architecture is pushed into the layer whose results are only temporary.
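A minimal Storm sketch of this incremental model is shown below. The "url" field name and the in-memory map are illustrative assumptions; a real speed layer would persist the counts in a store that supports random writes, as discussed next. The packages are those of Storm 1.x and later (older releases used backtype.storm).

import java.util.HashMap;
import java.util.Map;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Speed-layer bolt: updates the realtime pageview counts incrementally as each
// event arrives, instead of recomputing them from the whole master dataset.
public class RealtimePageViewBolt extends BaseBasicBolt {
    private final Map<String, Long> counts = new HashMap<>();

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String url = tuple.getStringByField("url");
        long updated = counts.merge(url, 1L, Long::sum); // increment, do not recompute
        collector.emit(new Values(url, updated));        // a downstream bolt persists the delta
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("url", "count"));
    }
}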
The tricky part is exposing the realtime views so that they can be queried and merged with the batch views to produce complete results.
Because the realtime views are incremental, the speed layer needs both random reads and random writes. For this I will use Apache HBase.
HBase lets Storm continuously increment the realtime views and, at the same time, lets Impala query those results and merge them with the batch views. Impala's ability to query both the batch views stored in HDFS and the realtime views stored in HBase makes it a nearly perfect tool for the job.
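The following sketch shows a Storm bolt that persists the realtime view by incrementing HBase counters. The table name "pageview_realtime", the column family "v", and the tuple field name are assumptions; HBase connection settings are read from the hbase-site.xml found on the classpath.

import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

// Persists the realtime view: every event atomically increments a counter cell in
// HBase, giving the speed layer random writes (from Storm) and random reads (for queries).
public class HBaseRealtimeViewBolt extends BaseBasicBolt {
    private static final byte[] FAMILY = Bytes.toBytes("v");
    private static final byte[] QUALIFIER = Bytes.toBytes("views");

    private transient Connection connection; // not serializable, so created in prepare()
    private transient Table table;

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
        try {
            Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
            connection = ConnectionFactory.createConnection(conf);
            table = connection.getTable(TableName.valueOf("pageview_realtime"));
        } catch (IOException e) {
            throw new RuntimeException("Could not connect to HBase", e);
        }
    }

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        try {
            byte[] row = Bytes.toBytes(tuple.getStringByField("url"));
            table.incrementColumnValue(row, FAMILY, QUALIFIER, 1L); // atomic counter update
        } catch (IOException e) {
            throw new RuntimeException("Failed to increment realtime view", e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt: nothing is emitted downstream.
    }

    @Override
    public void cleanup() {
        try {
            if (table != null) table.close();
            if (connection != null) connection.close();
        } catch (IOException e) {
            // best effort on shutdown
        }
    }
}

Once the realtime view is also registered in the Hive metastore as an HBase-backed table, an Impala query can, in principle, combine it with the HDFS-backed batch view (for example with a UNION ALL followed by an aggregation) to return complete, up-to-date results.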
The Lambda Architecture described above can also be summarized in a diagram:
[Figure: Big Data Lambda Architecture]