Hadoop: The Definitive Guide, Study Notes (Part 1)
Disclaimer: These are my personal notes and understanding based on Hadoop: The Definitive Guide, intended for study reference only. Corrections are welcome; let's learn and improve together.
When reprinting, please cite: http://blog.csdn.net/my_acm
1.
Data volumes are growing far faster than disk read speeds, so traditional approaches to data storage and analysis are no longer suitable for big-data processing.
Hadoop has two core technologies: HDFS (the Hadoop Distributed File System) and MapReduce (a map phase that transforms input into key/value pairs, and a reduce phase that merges the values for each key).
With data stored in HDFS, it can be spread across many different machines (or, of course, kept centralized) and analyzed in parallel using the MapReduce model. Because the system is distributed, it can run on machines of modest performance, which keeps costs low.
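The map and reduce phases described above can be sketched without any framework at all. The following is a minimal, framework-free word-count example in plain Python; it is not the Hadoop API, and all function names here are illustrative only:

```python
# Conceptual sketch of the map/reduce model (NOT the Hadoop API).
from collections import defaultdict

def map_phase(lines):
    # Map: turn each input line into (word, 1) key/value pairs.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: merge the values for each key (here, sum the counts).
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big ideas", "big data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'big': 3, 'data': 2, 'ideas': 1}
```

In real Hadoop, the map and reduce functions run in parallel on different machines, and the framework handles the shuffle step between them.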
2.
Why can't traditional RDBMS adapt to big data processing?
First, take a look at the following table:
More importantly, MapReduce and relational databases differ in the structure of the data sets they handle.
Structured data is data organized into entities with a precise, explicit format, conforming to a predefined schema. Semi-structured and unstructured data are usually not handled well by an RDBMS, but MapReduce handles unstructured data well.
Of course, the difference between RDBMSs and MapReduce may blur over time.
Pig and Hive are two important higher-level query languages built on top of MapReduce.
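For instance, a Hive query reads much like SQL while being compiled into MapReduce jobs under the hood. The table and column names below are hypothetical, for illustration only:

```sql
-- Hypothetical Hive table of words; Hive translates this SQL-like
-- query into one or more MapReduce jobs behind the scenes.
SELECT word, COUNT(*) AS freq
FROM words
GROUP BY word;
```

Pig offers a similar abstraction through its own dataflow language, Pig Latin.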
3.
Hadoop was created by Doug Cutting, the creator of Apache Lucene, a widely used text-search library. Although Hadoop is best known for MapReduce and HDFS, it also includes a number of other subprojects and services.