Enterprise-class big data processing solutions cover three business scenarios:
1. Offline processing: MapReduce (first generation), Spark SQL (second generation)
2. Real-time processing: database operations, Storm
3. Quasi-real-time processing: Spark Streaming
MapReduce vs. Spark
MR vs. Spark, pros and cons (i):
A. MapReduce reads and writes data to disk frequently between steps, which slows processing down.
B. Spark performs its computations in memory, eliminating most of that slow disk I/O.
MR vs. Spark, pros and cons (ii):
A. In MapReduce, each computation step has no lineage linking it to the previous step.
B. In Spark, each step inherits lineage from the previous one; the lineage mechanism (built on lazy evaluation) can trace back to the data source or to a checkpoint, giving the framework fault tolerance and room for automatic optimization (see the sketch after this comparison).
MR vs. Spark, pros and cons (iii):
Visualization of program execution: Spark provides a DAG view of the job.
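To make the lineage and DAG points concrete, here is a minimal Scala sketch of Spark's lazy evaluation and lineage tracking; the input path, output path, and checkpoint directory are hypothetical. toDebugString prints the dependency chain that Spark can replay from the source, or from the checkpoint, if a partition is lost.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object LineageDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("LineageDemo").setMaster("local[*]"))
    sc.setCheckpointDir("/tmp/spark-checkpoints")        // hypothetical checkpoint location

    // Transformations are lazy: nothing runs until an action is called.
    val words  = sc.textFile("hdfs:///data/input.txt")   // hypothetical input path
                   .flatMap(_.split("\\s+"))
    val counts = words.map((_, 1)).reduceByKey(_ + _)

    counts.checkpoint()            // persist to reliable storage; lineage is truncated here
    println(counts.toDebugString)  // print the lineage (dependency chain) behind this RDD

    // The action triggers the whole DAG; narrow transformations are pipelined in memory.
    counts.saveAsTextFile("hdfs:///data/wordcount-out")  // hypothetical output path
    sc.stop()
  }
}
```

An equivalent MapReduce job materializes intermediate results on disk between phases instead of tracking a lineage it can replay, which is exactly the contrast drawn above.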
Comparison of Spark Streaming and Storm
Streaming vs. Storm, pros and cons (i):
A. Storm computes like an electric current: records are processed one by one as they flow in.
B. Spark Streaming introduces micro-batching to raise throughput, trading some real-time latency for higher throughput.
Streaming vs. Storm, pros and cons (ii):
A. Storm uses a topology computation model (a directed acyclic graph of processing nodes).
B. Spark Streaming also builds DAGs for its streaming computation, and intermediate data can be handed to Spark's rich computational libraries such as Spark SQL, MLlib, and GraphX (see the sketch below).
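As a minimal sketch of the micro-batch idea (the socket source host and port are placeholders): Spark Streaming slices the incoming stream into small batches, and the batch interval is the knob that trades latency against throughput.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object MicroBatchDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MicroBatchDemo").setMaster("local[2]")
    // Batch interval = the throughput/latency trade-off:
    // a larger interval raises throughput, a smaller one gets closer to real time.
    val ssc = new StreamingContext(conf, Seconds(5))

    // Hypothetical text source; each 5-second batch is processed as one small job.
    val lines  = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Storm, by contrast, hands each tuple to the topology as soon as it arrives, so its latency is lower but its per-record overhead is higher.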
Traditional database real-time operations vs. big data technology:
A. Traditional databases are real-time, but they cannot cope once the data volume becomes large, and performance drops.
B. Big data technology handles the large data volume, but sacrifices real-time responsiveness.
In summary: no technology is perfect; to gain on one side you must sacrifice on the other. There are trade-offs between throughput and real-time latency, and between memory and speed. Is offline processing still needed as real-time processing capability grows? If data processing is not timely, the value of the data inevitably suffers. The core value of big data is that data mining and data analysis ultimately serve the behavior and decision-making power of data consumers.
Reflecting further: since every big data processing technology is flawed, how can we achieve the ideal result we have in mind?
Think of Cao Cao's talent-selection strategy in the Three Kingdoms: let each person do what they do best; as long as you have a strength, it will not go to waste.
So a big data processing solution is not the world of any single technology; rather, the pieces are tightly integrated so that their strengths complement one another and together achieve the desired effect. It is therefore important to understand each technology's strengths and usage scenarios in order to choose the right one for the actual business scenario. To date:
1. Hadoop is used only for storage and resource management
2. Spark is used only for computation
3. Storm is used only for computation
4. Kafka is the data buffering layer (it smooths out streaming data volumes that fluctuate between large and small; see the pipeline sketch after this list)
5. Flume handles data collection
6. Tachyon (now Alluxio) is a distributed in-memory file system
7. Redis and MongoDB are distributed in-memory databases
8. Solr and Lucene are real-time search engines
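To illustrate how these pieces combine rather than compete, here is a minimal sketch of one common wiring: a collector such as Flume pushes events into Kafka, which buffers them, and Spark Streaming consumes them for computation. The broker address, topic name, and consumer group are hypothetical, and the sketch assumes the spark-streaming-kafka-0-10 integration is on the classpath.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._

object KafkaPipelineDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaPipelineDemo").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(5))

    // Kafka is the buffering layer between collectors (e.g. Flume) and the compute layer.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "localhost:9092",          // hypothetical broker
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "demo-consumer-group",     // hypothetical consumer group
      "auto.offset.reset"  -> "latest"
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Array("events"), kafkaParams))

    // Spark does the computation; results could then flow on to Redis, HDFS,
    // a search index, or wherever the business scenario requires.
    val counts = stream.map(_.value)
                       .flatMap(_.split(" "))
                       .map((_, 1))
                       .reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```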
These are basically the mainstream data processing technologies. Although each is listed above as a single kind, every kind has multiple alternatives; their purposes are the same, but the scenarios and processing logic differ. How to build an enterprise-class big data processing solution is covered in the next part.
Enterprise-Class Big Data processing solution-01