The Hadoop family includes many commonly used projects: Hadoop, Hive, Pig, HBase, Sqoop, Mahout, ZooKeeper, Avro, Ambari, and Chukwa, with newer additions such as YARN, HCatalog, Oozie, Cassandra, Hama, Whirr, Flume, Bigtop, Crunch, and Hue. Since 2011, China has entered an era of surging big data, and the family of software represented by Hadoop occupies a vast expanse of the data-processing landscape. In the open source community and among vendors, nearly every piece of data software is moving closer to Hadoop. Hadoop has grown from a niche tool into the de facto standard for big data development. On top of Hadoop's core technology, the Hadoop family of products has spread under the banner of "big data". As developers, we have to keep up with the rhythm, seize the opportunity, and move forward together with Hadoop!
Preface: I have been using Hadoop for some time now, from initial confusion, through all kinds of experiments, to today's combined applications... The data-processing work I am gradually getting involved in is already inseparable from Hadoop. Hadoop's success in the big data field has in turn accelerated its own growth, and the Hadoop family of products now numbers more than 20. It is time to organize my own knowledge and string the products and technologies together. Doing so not only deepens the impression they leave, but also lays a foundation for future technical direction and technology selection.
1. Hadoop Family Products
As of 2013, according to Cloudera's statistics, the Hadoop family had grown to 20 products!
http://blog.cloudera.com/blog/2013/01/apache-hadoop-in-2013-the-state-of-the-platform/
Below, I divide these 20 products into two categories:
- The first category: products I have already mastered.
- The second category: a to-do list of products I plan to keep learning.
Product descriptions:
- Apache Hadoop: an open source distributed computing framework from the Apache open source organization, providing a distributed file system subproject (HDFS) and a software architecture that supports MapReduce distributed computing (see the MapReduce sketch after this list).
- Apache Hive: a Hadoop-based data warehouse tool that maps structured data files onto database tables and implements simple MapReduce statistics through SQL-like statements, without having to develop dedicated MapReduce applications; it is well suited for statistical analysis of data warehouses (see the Hive JDBC sketch after this list).
- Apache Pig: a large-scale Hadoop-based data analysis tool that provides an SQL-like language called Pig Latin, whose compiler translates SQL-like data analysis requests into a series of optimized MapReduce operations.
- Apache HBase: a highly reliable, high-performance, column-oriented, scalable distributed storage system; HBase can be used to build large structured-storage clusters on inexpensive PC servers (see the HBase client sketch after this list).
- Apache Sqoop: a tool for transferring data between Hadoop and relational databases: data from a relational database (MySQL, Oracle, Postgres, etc.) can be imported into Hadoop's HDFS, and HDFS data can likewise be exported into a relational database.
- Apache ZooKeeper: a distributed, open source coordination service designed for distributed applications. It mainly solves data management problems frequently encountered in distributed applications, simplifies the coordination and management of distributed applications, and provides high-performance distributed services (see the ZooKeeper sketch after this list).
- Apache Mahout: a Hadoop-based distributed framework for machine learning and data mining. Mahout implements a number of data mining algorithms with MapReduce, solving the problem of parallel mining.
- Apache Cassandra: an open source distributed NoSQL database system. Originally developed by Facebook to store simple-format data, it combines the data model of Google BigTable with the fully distributed architecture of Amazon Dynamo.
- Apache Avro: a data serialization system designed to support data-intensive applications with large volumes of data exchange. Avro is a new data serialization format and transfer tool that will gradually replace Hadoop's original IPC mechanism.
- Apache Ambari: a web-based tool that supports the provisioning, management, and monitoring of Hadoop clusters.
- Apache Chukwa: an open source data collection system for monitoring large distributed systems. It collects all kinds of data into files suited to Hadoop processing and stores them in HDFS, where Hadoop performs various MapReduce operations on them.
- Apache Hama: an HDFS-based BSP (Bulk Synchronous Parallel) computing framework. Hama can be used for large-scale big data computation, including graph, matrix, and network algorithms.
- Apache Flume: a distributed, reliable, highly available system for aggregating large volumes of logs, which can be used for log data collection, processing, and transfer.
- Apache Giraph: a scalable, distributed, iterative graph processing system built on the Hadoop platform, inspired by BSP (Bulk Synchronous Parallel) and Google's Pregel.
- Apache Oozie: a workflow engine server that manages and coordinates the tasks running on the Hadoop platform (HDFS, Pig, and MapReduce).
- Apache Crunch: a Java library, based on Google's FlumeJava library, for creating MapReduce programs. Like Hive and Pig, Crunch provides a library of patterns for common tasks such as joining data, performing aggregations, and sorting records.
- Apache Whirr: a set of libraries for running cloud services, including Hadoop, in a cloud-neutral way. Whirr supports Amazon EC2 and Rackspace services.
- Apache Bigtop: a tool for packaging, distributing, and testing Hadoop and its surrounding ecosystem.
- Apache HCatalog: Hadoop-based table and storage management that enables centralized metadata and schema management, spans Hadoop and RDBMSs, and provides relational views through Pig and Hive.
- Cloudera Hue: a web-based monitoring and management system that implements web-based operation and management of HDFS, MapReduce/YARN, HBase, Hive, and Pig.
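Before moving on, a few sketches to ground the descriptions above. First, the HDFS-plus-MapReduce pairing in Apache Hadoop: below is a minimal word-count job in Java against the org.apache.hadoop.mapreduce API, the canonical first MapReduce program. The input and output paths are command-line arguments and purely illustrative.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Mapper: emit (word, 1) for every token in each input line
    public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reducer (also used as combiner): sum the counts for each word
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```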
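For Hive, here is a minimal sketch of running an SQL-like statement without writing any MapReduce code, via the HiveServer2 JDBC driver. The host, port, empty credentials, and the `logs` table are all assumptions for illustration, not anything prescribed above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryDemo {
    public static void main(String[] args) throws Exception {
        // HiveServer2 JDBC driver; the URL below assumes a local default setup
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hive", "");
             Statement stmt = conn.createStatement()) {
            // Hive compiles this SQL-like query into MapReduce jobs behind the scenes;
            // the "logs" table is hypothetical
            ResultSet rs = stmt.executeQuery(
                    "SELECT category, COUNT(*) FROM logs GROUP BY category");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}
```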
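For HBase, a small sketch of the column-oriented model using the HBase 1.x+ Java client: write one cell under a column family, then read it back. The `user` table and its `info` column family are assumed to already exist.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseDemo {
    public static void main(String[] args) throws Exception {
        // Reads cluster settings (ZooKeeper quorum, etc.) from hbase-site.xml on the classpath
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("user"))) {
            // Write one cell: row key "row1", column family "info", qualifier "name"
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
            table.put(put);
            // Read the same cell back
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
            System.out.println(Bytes.toString(value));
        }
    }
}
```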
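For ZooKeeper, a sketch of the primitive most coordination recipes build on: an ephemeral znode, which disappears when its session ends and so underpins distributed locks, leader election, and membership. The connection string and znode path are illustrative.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkEphemeralDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Connect to a (hypothetical) local ensemble with a 5-second session timeout
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await(); // block until the session is established

        // An ephemeral znode lives only as long as this session: other clients
        // watching it are notified as soon as this process dies
        String path = zk.create("/demo-lock", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        System.out.println("Created ephemeral znode: " + path);

        zk.close(); // the znode is removed automatically here
    }
}
```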
2. Hadoop Family Learning Roadmap
Below I describe the installation and use of each product separately, summarizing the learning route from my own experience.
Hadoop
- Hadoop Learning Roadmap
- YARN Learning Roadmap
- Build Hadoop projects with Maven
- Hadoop Historical Version Installation
- Hadoop programming: calling HDFS
- Massive web log analysis: using Hadoop to extract KPI statistical indicators
- Build a movie recommendation system with Hadoop
- Create a Hadoop parent virtual machine
- Cloning virtual machines to add Hadoop nodes
- The R language injects statistical blood into Hadoop
- RHadoop Practice Series (1): Hadoop environment setup
Hive
- Hive Learning Roadmap
- Hive Installation and Usage tips
- Testing a 10GB data import into Hive
- R Sword NoSQL series: Hive
- Extracting reverse-repurchase (repo) information from historical data with RHive
Pig
Zookeeper
- ZooKeeper Learning Roadmap
- Installation and use of a ZooKeeper pseudo-distributed cluster
- ZooKeeper implementing a distributed Queue
- ZooKeeper implementing a distributed FIFO queue
HBase
- HBase Learning Roadmap
- RHadoop Practice Series (4): rhbase installation and use
Mahout
- Mahout Learning Roadmap
- Using R to parse Mahout's user-based collaborative filtering recommendation algorithm (UserCF)
- RHadoop Practice Series (3): R implementation of a MapReduce collaborative filtering algorithm
- Build Mahout projects with Maven
- Mahout Recommended Algorithm API
- Profiling the Mahout recommendation engine from its source code
- Mahout distributed program development: item-based collaborative filtering (ItemCF)
- Mahout distributed program development: K-means clustering
- Building a job recommendation engine with Mahout
Sqoop
Cassandra
- Cassandra Learning Roadmap
- Cassandra single-cluster experiment with 2 nodes
- R Sword NoSQL series: Cassandra