Hadoop and Spark are the most talked-about big data frameworks, but the names are often used loosely, without much thought about what each one actually does. Which of these technologies is industry actually using? What do the two have in common, how do they differ, and what problems does each one solve? Let's take a look.
They Solve Different Problems
First, Hadoop and Apache Spark are both big data frameworks, but they serve different purposes. Hadoop is essentially a distributed data infrastructure: it distributes huge datasets across the nodes of a cluster of commodity machines for storage, which means you don't need to buy and maintain expensive server hardware. Hadoop also indexes and tracks that data, making big data processing and analytics far more efficient than before. Spark, by contrast, is a tool for processing data that lives in distributed storage; it does not store the distributed data itself.
The Two Can Be Used Together or Separately
Besides the HDFS distributed storage layer that everyone associates with it, Hadoop also provides a data processing component called MapReduce, so you can set Spark aside entirely and use Hadoop's own MapReduce to process your data. Conversely, Spark does not depend on Hadoop to survive. But as noted above, Spark provides no file management system of its own, so it must be integrated with a distributed file system to operate; that can be Hadoop's HDFS or another cloud-based data platform. In practice, Spark is still most often deployed on top of Hadoop: the two are widely considered the best combination.
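To make the MapReduce processing model concrete, here is a minimal pure-Python sketch of its three phases (map, shuffle, reduce) applied to a word count. The function names are illustrative only, not Hadoop's actual API; in a real cluster each phase runs distributed across many nodes.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by key, as Hadoop does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["Hadoop stores data", "Spark processes data"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts)  # "data" appears once in each sample line
```

The same word count is a one-liner in Spark, but the underlying phases it executes are conceptually the same.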
Spark Processes Data Far Faster Than MapReduce
The reason is that Spark handles data differently. MapReduce processes data in discrete steps: "it reads data from the cluster, performs an operation, writes the results to the cluster, reads the updated data from the cluster, performs the next operation, writes those results back to the cluster, and so on," explains Kirk Borne, a data scientist at Booz Allen Hamilton. Spark, in contrast, performs the entire analysis in memory in close to real time: "it reads the data from the cluster, performs all of the required analytic operations, writes the results back to the cluster, and is done." As a result, Spark's batch processing is nearly 10 times faster than MapReduce, and its in-memory analytics nearly 100 times faster. If the data you process is mostly static and you have the patience to wait for batch jobs to finish, MapReduce's approach is perfectly acceptable. But if you need to analyze streaming data, such as readings collected by factory sensors, or if your application requires multiple passes over the data, you will probably want Spark. Most machine learning algorithms require multiple passes over the data, and Spark's typical application areas include real-time marketing campaigns, online product recommendation, cybersecurity analytics, and machine log monitoring.
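The difference described above can be sketched in pure Python. This is a toy simulation, not a benchmark: the "disk" pipeline writes and re-reads a temporary JSON file between steps, the way MapReduce round-trips intermediate results through the cluster's storage, while the "in-memory" pipeline keeps intermediate results in RAM, the way Spark does. The function names and the I/O counter are illustrative assumptions.

```python
import json
import os
import tempfile

def iterate_on_disk(data, steps):
    # MapReduce-style: every step writes its results out and the next
    # step reads them back in (simulated with a temporary JSON file).
    io_ops = 0
    path = os.path.join(tempfile.mkdtemp(), "step.json")
    for _ in range(steps):
        data = [x + 1 for x in data]      # the "processing" step
        with open(path, "w") as f:        # write results to "the cluster"
            json.dump(data, f)
        io_ops += 1
        with open(path) as f:             # read them back for the next step
            data = json.load(f)
        io_ops += 1
    return data, io_ops

def iterate_in_memory(data, steps):
    # Spark-style: intermediate results stay in memory between steps.
    for _ in range(steps):
        data = [x + 1 for x in data]
    return data, 0

disk_result, disk_io = iterate_on_disk([0, 1, 2], steps=5)
mem_result, mem_io = iterate_in_memory([0, 1, 2], steps=5)
print(disk_result == mem_result, disk_io, mem_io)
```

Both pipelines produce identical results, but the disk version performs two storage round-trips per step. That per-step overhead is exactly what hurts iterative workloads like machine learning, which may run dozens or hundreds of passes over the same data.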
Disaster Recovery
The two recover from failures in very different ways, but both do it well. Because Hadoop writes the data back to disk after every processing step, it is naturally resilient to system failures. Spark keeps its data objects in resilient distributed datasets (RDDs) spread across the cluster; an RDD can live in memory or on disk, and Spark records how each one was derived, so lost data can be recomputed, giving Spark full disaster-recovery capability as well.
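Spark's recovery mechanism rests on the fact that an RDD remembers its lineage: the parent dataset and the transformation that produced it, so a lost partition can simply be recomputed. Here is a toy pure-Python illustration of that idea; the `MiniRDD` class and its methods are invented for this sketch and are not Spark's API.

```python
class MiniRDD:
    """Toy stand-in for a Spark RDD: it stores its parent and the
    transformation that produced it, so lost data can be recomputed."""

    def __init__(self, data=None, parent=None, transform=None):
        self._cache = data          # materialized data; may be lost/dropped
        self.parent = parent        # lineage: where the data came from
        self.transform = transform  # lineage: how it was derived

    def map(self, fn):
        # Record the transformation lazily instead of storing results now.
        return MiniRDD(parent=self,
                       transform=lambda rows: [fn(r) for r in rows])

    def collect(self):
        if self._cache is None:
            # "Disaster recovery": rebuild the data by replaying the
            # recorded transformation on the parent's data.
            self._cache = self.transform(self.parent.collect())
        return self._cache

base = MiniRDD(data=[1, 2, 3])
doubled = base.map(lambda x: x * 2)
result = doubled.collect()

doubled._cache = None           # simulate losing the computed partition
recovered = doubled.collect()   # lineage lets us recompute it
```

Because recovery is recomputation rather than replication of intermediate results, Spark avoids MapReduce's habit of persisting every step to disk while still tolerating node failures.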
This article was reproduced from: http://www.linuxprobe.com/2-minutes-read-hadoop-spark-differences.html