Discover MapReduce: articles, news, trends, analysis, and practical advice about MapReduce.

Six Core Big Data Analysis Technologies You Can't Afford to Miss

At present, a large number of new technologies emerge in the big data field every year, providing effective means for big data storage, processing, analysis, and visualization.

Five trends in the processing and development of big data in the future

In recent years, big data has evolved from a buzzword confined to large companies into a driving force behind our digital lives. The following are five trends in the future processing and development of big data.

Hadoop-Based Big Data Analysis: Application Scenarios and Practice

To keep up with ever-changing business requirements, Jingdong's Jingmai team has adopted popular open-source big data compute engines such as Hadoop, building on the Jingdong Big Data Platform to create decision-making data products for JD operations and products.

Hadoop: A Detailed Explanation of the Working Mechanism of MapReduce

Hadoop is well suited to solving big data problems, relying heavily on its big data storage system, HDFS, and its big data processing system, MapReduce. Regarding MapReduce, there are a few questions worth examining.

Deep Understanding of MapReduce Architecture and Principles

MapReduce in Hadoop is a simple software framework on which applications can be built to run on large clusters of thousands of commodity machines, processing terabytes of data in parallel with reliable fault tolerance.
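The fault tolerance mentioned above boils down to one idea: when a task fails on one machine, the framework simply reschedules it elsewhere. The following is a minimal in-process sketch of that retry loop (the worker, splits, and failure simulation are all hypothetical, not Hadoop's actual scheduler):

```python
# Toy sketch of MapReduce-style fault tolerance: a failed task is not
# fatal; the scheduler re-runs it, up to a bounded number of attempts.

def run_with_retries(tasks, worker, max_attempts=3):
    """Run each task; on failure, reschedule it up to max_attempts times."""
    results = []
    for task in tasks:
        for attempt in range(max_attempts):
            try:
                results.append(worker(task))
                break
            except RuntimeError:
                if attempt == max_attempts - 1:
                    raise  # give up after repeated failures

    return results

failures = {"count": 0}

def worker(split):
    # Simulate one transient machine failure on the second split.
    if split == "split-2" and failures["count"] == 0:
        failures["count"] += 1
        raise RuntimeError("worker lost")
    return len(split.split("-"))  # trivial per-split computation

results = run_with_retries(["split-1", "split-2", "split-3"], worker)
```

In a real cluster the retry happens on a different machine and the scheduler also re-executes tasks from machines that crash mid-job, but the control flow is the same bounded retry loop.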

MapReduce Tutorial (1) Based on MapReduce Framework Development

MapReduce is a programming model for parallel computation over large-scale data sets (larger than 1 TB), designed to solve the computational problems of massive data.
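The programming model itself can be shown in a few lines. This is a minimal single-process sketch of the classic word-count job (the function names and toy inputs are illustrative, not a Hadoop API): map emits (key, value) pairs, an in-memory "shuffle" groups them by key, and reduce aggregates each group.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit (word, 1) for every word in the input split."""
    for word in document.split():
        yield (word, 1)

def reduce_phase(key, values):
    """Reduce: sum all counts collected for one key."""
    return (key, sum(values))

def run_job(splits):
    # Shuffle: group intermediate pairs by key before reducing.
    grouped = defaultdict(list)
    for split in splits:
        for key, value in map_phase(split):
            grouped[key].append(value)
    return dict(reduce_phase(k, v) for k, v in grouped.items())

counts = run_job(["big data big cluster", "data cluster data"])
```

The framework's job is to run many such map and reduce calls in parallel across machines; the user only writes the two functions.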

A Brief Introduction to the Workflow of MapReduce

This article briefly describes the execution steps and workflow of the MapReduce programming model in graphical form, making it simple and easy to understand.

Hadoop Learning - MapReduce Principle and Operation Process

Earlier we performed operations with HDFS and examined its principles and mechanisms. With a distributed file system in place, how do we process the files it stores? That is the job of the second component of Hadoop: MapReduce.

MapReduce Principles and Examples in Hadoop

Hadoop MapReduce is a programming model for data processing that is simple yet powerful, designed for parallel processing of big data.

Detailed MapReduce Shuffle Process: Splitting, Partitioning, Combining, and Merging …

In MapReduce, shuffle works more like the inverse of shuffling a deck of cards: according to specified rules, it "un-shuffles" the disordered output of the map side into ordered data that the reduce side can receive and process.
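The two rules that do this "un-shuffling" are partitioning (which reducer gets a key) and sorting (equal keys arrive together). Below is a simplified sketch of just that routing step; the two-reducer setup and the toy pairs are assumptions for illustration, though the `hash(key) % num_reducers` rule mirrors the behavior of Hadoop's default HashPartitioner:

```python
from collections import defaultdict

NUM_REDUCERS = 2  # assumed cluster setting

def partition(key):
    """Assign a key to a reducer, in the spirit of a hash partitioner."""
    return hash(key) % NUM_REDUCERS

def shuffle(map_output):
    """Route each (key, value) pair to its partition, sorted by key."""
    partitions = defaultdict(list)
    for key, value in map_output:
        partitions[partition(key)].append((key, value))
    # Sorting each partition guarantees every reducer sees all values
    # for one key as a contiguous group.
    return {p: sorted(pairs) for p, pairs in partitions.items()}

shuffled = shuffle([("b", 1), ("a", 1), ("b", 1), ("c", 1)])
```

Because all pairs with the same key hash to the same partition and each partition is sorted, a reducer can aggregate a key's values in a single streaming pass.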
