Apache Hadoop is an open-source project for reliable, scalable, distributed computing.
The Apache Hadoop software library is a framework that allows for the distributed processing of large datasets across clusters of computers using simple programming models. It is designed to scale up from a single server to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, so it delivers a highly available service on top of a cluster of computers, each of which may be prone to failure.
Apache Hadoop includes the following modules:
1. Hadoop Common: The common utilities that support the other Hadoop modules.
2. Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
3. Hadoop YARN: A framework for job scheduling and cluster resource management.
4. Hadoop MapReduce: A YARN-based system for parallel processing of large datasets (a brief sketch follows this list).
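To give a sense of the MapReduce programming model, here is a minimal word-count sketch in Java, closely following the canonical example; the class names and the command-line input/output paths are illustrative placeholders, not part of any particular distribution.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in its input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Input and output paths are placeholders taken from the command line.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}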
Other Apache projects related to Hadoop include:
1. Ambari™: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, with support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig, and Sqoop. Ambari also provides a dashboard for viewing cluster health, such as heatmaps, and the ability to view MapReduce, Pig, and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner.
2. Avro™: A data serialization system.
3. Cassandra™: A scalable, multi-master database with no single points of failure.
4. Chukwa™: A data collection system for managing large distributed systems.
5. HBase™: A scalable, distributed database that supports structured data storage for large tables.
6. Hive™: A data warehouse infrastructure that provides data summarization and ad hoc querying.
7. Mahout™: A scalable machine learning and data mining library.
8. Pig™: A high-level data-flow language and execution framework for parallel computation.
9. Spark™: A fast and general compute engine for Hadoop data. It provides a simple and expressive programming model that supports a wide range of applications, including ETL, machine learning, stream processing, and graph computation (a brief sketch follows this list).
10. ZooKeeper™: A high-performance coordination service for distributed applications.
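As a brief illustration of Spark's programming model, here is a minimal word-count sketch using Spark's Java API (assuming roughly Spark 2.x); the local master setting and the HDFS input/output paths are placeholder assumptions.

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
  public static void main(String[] args) {
    // Local master and paths are placeholders for this sketch.
    SparkConf conf = new SparkConf().setAppName("SparkWordCount").setMaster("local[*]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Read lines, split into words, count occurrences of each word.
    JavaRDD<String> lines = sc.textFile("hdfs:///path/to/input.txt");
    JavaPairRDD<String, Integer> counts = lines
        .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
        .mapToPair(word -> new Tuple2<>(word, 1))
        .reduceByKey(Integer::sum);

    counts.saveAsTextFile("hdfs:///path/to/output");
    sc.stop();
  }
}

The same pipeline could be pointed at any Hadoop-readable input; compared with the MapReduce sketch above, the job is expressed as a chain of transformations on a resilient distributed dataset rather than as separate mapper and reducer classes.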