Big Data architecture: Flume-NG + Kafka + Storm + HDFS real-time system combination

We all know about Hadoop, but Hadoop is not the whole story. How do we build a complete big data project? For offline processing Hadoop is still the better fit, but when the real-time requirements are strong and the data volume is large, we can use Storm. The question is what technologies to pair Storm with in order to build a project that suits our own needs.
1. What are the characteristics of a good project architecture?
2. How does the project architecture ensure data accuracy?
3. What is Kafka?
4. How are Flume and Kafka integrated?
5. What script can be used to check whether Flume is delivering data to Kafka?
Anyone who does software development knows the idea of modularity, and this design is modular for two reasons.
On the one hand, modularization divides the functions clearly, following the pipeline "data collection -- data access -- stream computation -- data output/storage":
1 "Data collection
Responsible for collecting data in real time from each node and choosing Cloudera Flume to realize
2 "Data access
Because the speed of data acquisition and the speed of data processing are not necessarily synchronous, a message middleware is added as a buffer, using Apache's Kafka
3 "Flow-type calculation
Real-time analysis of collected data, using Apache's storm
4 "Data transfer
Persistent with the results of the analysis, tentatively using MySQL
On the other hand, after modularization, if Storm goes down, data collection and data access keep running and no data is lost; once Storm comes back up, it can continue the stream computation. A minimal sketch of this decoupling follows.
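The sketch below illustrates the buffering with Kafka's stock console tools standing in for Flume and Storm. The topic name "logs" and the addresses are placeholders; on the 0.8-era Kafka this stack was built around, the consumer takes --zookeeper, while later Kafka releases use --bootstrap-server instead.

# The producing side keeps writing even while the consuming side (Storm) is down
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic logs

# When the consuming side comes back, it replays everything Kafka retained
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic logs --from-beginning

Messages produced while no consumer is attached stay in Kafka's log until the retention window expires, which is exactly what lets Storm restart without losing data.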
The overall architecture therefore looks like this: Flume (collection) feeds Kafka (buffering), Kafka feeds Storm (stream computation), and Storm writes its results to MySQL (storage).
A detailed description of each component, with installation and configuration, follows:
Flume
Flume is Cloudera's distributed, reliable, and highly available log collection system. It supports customizing all kinds of data senders in the log system to collect data, and it also provides the ability to do simple processing on the data and write it to various (customizable) data receivers.
Flume can collect data from sources such as console, RPC (Thrift-RPC), text (file), tail (Unix tail), syslog (the syslog log system, supporting both TCP and UDP modes), and exec (command execution); our system currently uses exec to capture logs.
Flume's data receivers can be console, text (file), DFS (HDFS file), RPC (Thrift-RPC), syslogTCP (the TCP syslog log system), and so on. In our system the data is received by Kafka; a sample configuration is sketched below.
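As an illustration, a minimal flume-conf.properties for this setup might look like the following. This is a sketch, not the article's actual configuration: the log path and the topic name "logs" are placeholders, and the Kafka sink class shown here (org.apache.flume.sink.kafka.KafkaSink) only ships with Flume 1.6 and later, so the Flume 1.4 build used in this article would need a third-party Kafka sink plugin with its own property names. The agent name "Producer" matches the --name flag in the start command below.

# exec source: capture a log file by tailing it (path is a placeholder)
Producer.sources = src
Producer.channels = ch
Producer.sinks = sink
Producer.sources.src.type = exec
Producer.sources.src.command = tail -F /var/log/app/app.log
Producer.sources.src.channels = ch

# in-memory channel as the buffer between source and sink
Producer.channels.ch.type = memory
Producer.channels.ch.capacity = 10000

# Kafka sink: built into Flume 1.6+; Flume 1.4 needs a plugin instead
Producer.sinks.sink.type = org.apache.flume.sink.kafka.KafkaSink
Producer.sinks.sink.topic = logs
Producer.sinks.sink.brokerList = localhost:9092
Producer.sinks.sink.channel = ch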
Flume download and documentation: http://flume.apache.org/
Flume installation:
$ tar zxvf apache-flume-1.4.0-bin.tar.gz
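After unpacking (the directory name follows the tarball), a quick sanity check is to print the version:

$ cd apache-flume-1.4.0-bin
$ bin/flume-ng version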
Flume start command:
$ bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties --name Producer -Dflume.root.logger=INFO,console
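To answer question 5 above: a simple way to check whether Flume is actually delivering data to Kafka is to watch the topic's latest offsets grow. Kafka ships a GetOffsetShell tool for this (the topic name "logs" is the placeholder used above, and the exact flags can vary between Kafka versions):

$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic logs --time -1

Run it twice while Flume is tailing an active log; if the printed offsets increase between runs, events are flowing into Kafka. Attaching kafka-console-consumer.sh to the topic, as in the earlier sketch, shows the actual event bodies.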