Apache Hadoop has become a driving force behind the growth of the big data industry. Technologies such as Hive and Pig come up constantly, but what do they actually do, and why does the ecosystem need so many oddly named projects (Oozie, ZooKeeper, Flume, and so on)?
Hadoop makes it cheap to process large volumes of data (typically 10-100 GB or more, in a variety of forms, both structured and unstructured). But how is that different from what came before?
Today's enterprise data warehouses and relational databases handle structured data well and can store large amounts of it, but they are expensive, and their requirement that data fit a predefined structure limits the types of data they can process. That rigidity makes it hard for a data warehouse to stay agile when confronted with massive amounts of heterogeneous data, which usually means that valuable data sources within the organization are never mined. This is the biggest difference between Hadoop and traditional data processing approaches.
This article describes the components of the Hadoop system and explains the functionality of each component.
The Hadoop ecosystem contains more than ten components, or subprojects, and installing, configuring, and deploying a cluster, then managing it as it grows, remain real challenges.
Hadoop's main components include:
Hadoop: a software framework, written in Java, for data-intensive distributed applications
ZooKeeper: a highly reliable distributed coordination system
MapReduce: a flexible parallel data processing framework for large data sets (see the word-count sketch after this list)
HDFS: the Hadoop Distributed File System (see the file-write sketch after this list)
Oozie: a workflow scheduler responsible for MapReduce job scheduling
HBase: a key-value database
Hive: a data warehouse package built on MapReduce
Pig: a high-level data processing layer built on top of Hadoop. Its Pig Latin language gives programmers a more intuitive way to specify data flows.
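To make the MapReduce component more concrete, here is a minimal word-count sketch against the standard org.apache.hadoop.mapreduce API: the mapper emits a (word, 1) pair for every token, and the reducer sums the counts per word. The class names (WordCount, TokenizerMapper, IntSumReducer) are illustrative, and the input and output paths are taken from the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every word in its input split
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts emitted for each word
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged into a jar, such a job would typically be submitted with something like `hadoop jar wordcount.jar WordCount /input /output`.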
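And here is a small file-write sketch for HDFS using the org.apache.hadoop.fs.FileSystem API, assuming the cluster configuration (core-site.xml, hdfs-site.xml) is on the classpath; the path /tmp/hello.txt and the class name HdfsWriteExample are just hypothetical examples.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
  public static void main(String[] args) throws Exception {
    // Picks up fs.defaultFS from the cluster configuration on the classpath
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical target path used only for illustration
    Path path = new Path("/tmp/hello.txt");

    // Create (or overwrite) the file and write a short message
    try (FSDataOutputStream out = fs.create(path, true)) {
      out.writeUTF("Hello HDFS");
    }
    System.out.println("File exists: " + fs.exists(path));
  }
}
```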
Applications that fit the Hadoop MapReduce model typically contain both structured and unstructured data, have few or no dependencies between records, and lend themselves to large-scale parallel processing. Typical use cases include:
Batch analysis fast enough to meet business requirements, such as business reports, site traffic analysis, and product recommendation analysis
Iterative analysis using data mining and machine learning algorithms, such as association rule analysis, k-means clustering, link analysis, and classification (for example, the well-known naive Bayes algorithm)
Statistical analysis and refinement, such as web log analysis
Behavior analysis, such as click-stream analysis and user video-viewing behavior
Conversion and enrichment, such as social media processing, ETL, and data standardization
Typically, Hadoop is deployed in a distributed environment. As with Linux distributions, vendors integrate and test the components of the Apache Hadoop ecosystem and add their own tools and management capabilities.