Both Spark and Hadoop MapReduce are open-source cluster computing frameworks, but they target different scenarios. Spark is built around in-memory computation: by caching intermediate results in memory, it optimizes iterative workloads and speeds up data analysis. Hadoop MapReduce processes data in batches, and every job must be launched separately, so it takes a long time to get results. For workloads such as machine learning and database queries, Spark can run up to 100 times faster than Hadoop MapReduce. As a result, Spark is better suited to computing applications with real-time requirements, while Hadoop MapReduce fits non-real-time analysis of massive data sets. In addition, Spark code is more concise than Hadoop MapReduce code, and its API supports common programming languages such as Java, Scala, and Python, making it more user-friendly.
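The iterative, in-memory pattern described above can be sketched in Scala, Spark's native language. This is a minimal illustration, not a benchmark; the application name and the use of a local master are placeholder assumptions:

```scala
import org.apache.spark.sql.SparkSession

object IterativeDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical local session; on a real cluster the master is
    // normally supplied by spark-submit rather than hard-coded.
    val spark = SparkSession.builder()
      .appName("iterative-demo")
      .master("local[*]")
      .getOrCreate()

    val data = spark.sparkContext.parallelize(1 to 1000000)

    // cache() keeps the dataset in memory, so the repeated passes below
    // reuse it directly -- the key difference from MapReduce, which
    // writes intermediate results back to disk between jobs.
    data.cache()

    var sum = 0L
    for (_ <- 1 to 10) {
      // Each pass reads the cached partitions instead of recomputing them.
      sum = data.map(_.toLong).reduce(_ + _)
    }
    println(s"sum = $sum")

    spark.stop()
  }
}
```

In MapReduce, each of the ten passes would be a separate job with its own startup cost and disk I/O; in Spark, only the first pass materializes the data and the rest run at memory speed.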