Hadoop Family (3)


[Figure: Hadoop family overview (Hadoopfamilysmall.png)]


  • Apache Hadoop: An open-source distributed computing framework from the Apache Software Foundation. It provides the distributed file system subproject (HDFS) and a software framework that supports MapReduce distributed computation. (A minimal MapReduce sketch appears after this list.)

  • Apache Hive: A Hadoop-based data warehousing tool. It maps structured data files onto database tables and lets simple MapReduce statistics be expressed in SQL-like (HiveQL) statements, so there is no need to develop dedicated MapReduce applications. It is well suited to statistical analysis over a data warehouse. (A JDBC query sketch appears after this list.)

  • Apache Pig: A Hadoop-based tool for large-scale data analysis. It provides an SQL-like language called Pig Latin, and its compiler translates SQL-like data analysis requests into a series of optimized MapReduce operations. (A Pig Latin sketch appears after this list.)

  • Apache HBase: A highly reliable, high-performance, column-oriented, scalable distributed storage system. Using HBase, large structured-storage clusters can be built on inexpensive commodity servers. (A Java client sketch appears after this list.)

  • Apache Sqoop: A tool for transferring data between Hadoop and relational databases. It can import data from a relational database (MySQL, Oracle, Postgres, etc.) into HDFS, and export data from HDFS back into a relational database.

  • Apache ZooKeeper: A distributed, open-source coordination service designed for distributed applications. It addresses data-management problems that distributed applications commonly run into, simplifies the coordination and management of distributed applications, and provides a high-performance distributed service. (A client sketch appears after this list.)

  • Apache Mahout: A distributed machine learning and data mining framework based on Hadoop. Mahout implements a number of data mining algorithms on MapReduce to tackle the problem of parallel mining. (A recommender sketch appears after this list.)

  • Apache Cassandra: An open-source distributed NoSQL database system. Originally developed by Facebook to store simply structured data, it combines the data model of Google BigTable with the fully distributed architecture of Amazon Dynamo.

  • Apache Avro: A data serialization system designed to support data-intensive applications that exchange large volumes of data. Avro is a newer serialization format and transport tool that is gradually replacing Hadoop's original IPC mechanism. (A serialization sketch appears after this list.)

  • Apache Ambari: A web-based tool for provisioning, managing, and monitoring Hadoop clusters.

  • Apache Chukwa: An open-source data collection system for monitoring large distributed systems. It gathers many kinds of data into Hadoop-friendly files stored in HDFS, where Hadoop can run MapReduce jobs over them.

  • Apache Hama: An HDFS-based BSP (Bulk Synchronous Parallel) computing framework. Hama can be used for large-scale big-data computation, including graph, matrix, and network algorithms.

  • Apache Flume: A distributed, reliable, highly available system for aggregating large volumes of logs; it can be used for log data collection, processing, and transfer.

  • Apache Giraph: A scalable, distributed, iterative graph processing system built on the Hadoop platform, inspired by the BSP (Bulk Synchronous Parallel) model and Google's Pregel.

  • Apache Oozie: A workflow engine server that manages and coordinates jobs running on the Hadoop platform (HDFS, Pig, and MapReduce). (A job-submission sketch appears after this list.)

  • Apache Crunch: A Java library, based on Google's FlumeJava, for writing MapReduce programs. Like Hive and Pig, Crunch provides a library of patterns for common tasks such as joining data, performing aggregations, and sorting records. (A pipeline sketch appears after this list.)

  • Apache Whirr: A set of libraries for running cloud services, including Hadoop, with a high degree of provider neutrality. Whirr supports Amazon EC2 and Rackspace services.

  • Apache Bigtop: A tool for packaging, distributing, and testing Hadoop and its surrounding ecosystem.

  • Apache HCatalog: Hadoop-based table and storage management. It provides central metadata and schema management spanning Hadoop and RDBMSs and exposes relational views to Pig and Hive.

  • Cloudera Hue: A web-based monitoring and management console that provides web operations and administration for HDFS, MapReduce/YARN, HBase, Hive, and Pig.
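
The sketches below illustrate several of the projects above in Java. They are minimal, hedged examples rather than production code: hostnames, paths, table names, and schemas are invented for illustration. First, the MapReduce programming model behind Apache Hadoop, shown as the classic word count:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emit (word, 1) for every token in a line.
    public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reducer: sum the counts for each word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /user/demo/input
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /user/demo/output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```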
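
For Hive, a common way to run HiveQL from Java is the HiveServer2 JDBC driver. The connection URL, credentials, and the page_views table below are assumptions; adjust them for your cluster.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQuerySketch {
    public static void main(String[] args) throws Exception {
        // HiveServer2 JDBC driver; URL, database, and table are assumptions.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
             Statement stmt = conn.createStatement()) {
            // A simple SQL-like aggregation that Hive compiles into MapReduce jobs.
            ResultSet rs = stmt.executeQuery(
                    "SELECT country, COUNT(*) AS views FROM page_views GROUP BY country");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}
```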
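
Pig Latin scripts are usually run from Pig's shell, but they can also be driven from Java through PigServer, as in this sketch; the input file and field layout are assumptions.

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class PigSketch {
    public static void main(String[] args) throws Exception {
        // Local mode for illustration; use ExecType.MAPREDUCE against a cluster.
        PigServer pig = new PigServer(ExecType.LOCAL);

        // A tiny Pig Latin word count; Pig compiles these statements into MapReduce jobs.
        pig.registerQuery("lines = LOAD 'input.txt' AS (line:chararray);");
        pig.registerQuery("words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;");
        pig.registerQuery("grouped = GROUP words BY word;");
        pig.registerQuery("counts = FOREACH grouped GENERATE group, COUNT(words);");

        // Materialize the result to an output directory.
        pig.store("counts", "word_counts");
    }
}
```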
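
A minimal sketch of the HBase Java client (1.x-style API), assuming a table named metrics with a column family cf already exists and that hbase-site.xml is on the classpath.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSketch {
    public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath for the ZooKeeper quorum.
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("metrics"))) {

            // Write one cell: row "host1", column cf:cpu, value "0.73".
            Put put = new Put(Bytes.toBytes("host1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("cpu"), Bytes.toBytes("0.73"));
            table.put(put);

            // Read it back.
            Result result = table.get(new Get(Bytes.toBytes("host1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("cpu"));
            System.out.println("cpu = " + Bytes.toString(value));
        }
    }
}
```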
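
For ZooKeeper, a sketch of the Java client storing and reading a small piece of shared configuration under a znode; the connect string, znode path, and value are assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZooKeeperSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);

        // Connect to a ZooKeeper ensemble (single local server here).
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // Publish a small configuration value as a persistent znode
        // (throws NodeExistsException if the path already exists).
        byte[] data = "max_workers=8".getBytes(StandardCharsets.UTF_8);
        zk.create("/demo-config", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        // Any client in the cluster can now read (and watch) the same value.
        byte[] read = zk.getData("/demo-config", false, null);
        System.out.println(new String(read, StandardCharsets.UTF_8));

        zk.close();
    }
}
```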
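
Mahout's distributed algorithms are launched as Hadoop jobs, but its ideas are easiest to see in the in-memory "Taste" recommender API. This sketch is not the MapReduce path; it assumes a ratings.csv file of userID,itemID,rating lines and is illustrative only.

```java
import java.io.File;
import java.util.List;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class MahoutRecommenderSketch {
    public static void main(String[] args) throws Exception {
        // ratings.csv: one "userID,itemID,rating" line per preference (assumed input).
        DataModel model = new FileDataModel(new File("ratings.csv"));

        // User-based collaborative filtering: find similar users, then their liked items.
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
        Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);

        // Top-3 recommendations for user 1.
        List<RecommendedItem> items = recommender.recommend(1L, 3);
        for (RecommendedItem item : items) {
            System.out.println(item.getItemID() + " -> " + item.getValue());
        }
    }
}
```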
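
For Avro, a sketch of the generic API: parse a schema, write a record to an Avro container file, and read it back. The User schema here is an assumption for illustration.

```java
import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class AvroSketch {
    public static void main(String[] args) throws Exception {
        // A minimal record schema (assumed for illustration).
        String schemaJson = "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                + "{\"name\":\"name\",\"type\":\"string\"},"
                + "{\"name\":\"age\",\"type\":\"int\"}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        // Build a record and write it to a compact, schema-tagged container file.
        GenericRecord user = new GenericData.Record(schema);
        user.put("name", "alice");
        user.put("age", 30);

        File file = new File("users.avro");
        try (DataFileWriter<GenericRecord> writer =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.create(schema, file);
            writer.append(user);
        }

        // Read the records back; the schema travels with the file.
        try (DataFileReader<GenericRecord> reader =
                     new DataFileReader<>(file, new GenericDatumReader<GenericRecord>())) {
            for (GenericRecord rec : reader) {
                System.out.println(rec.get("name") + " " + rec.get("age"));
            }
        }
    }
}
```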
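
Oozie workflows themselves are defined in XML (workflow.xml) stored on HDFS; this sketch only shows submitting one through the Oozie Java client, with the server URL, HDFS paths, and hostnames all assumed.

```java
import java.util.Properties;
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

public class OozieSubmitSketch {
    public static void main(String[] args) throws Exception {
        // The Oozie server URL is an assumption; point it at your cluster.
        OozieClient oozie = new OozieClient("http://localhost:11000/oozie");

        // Job properties; APP_PATH must point at a directory containing workflow.xml.
        Properties conf = oozie.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/user/demo/wordcount-wf");
        conf.setProperty("nameNode", "hdfs://namenode:8020");
        conf.setProperty("jobTracker", "resourcemanager:8032");

        // Submit and start the workflow, then check its status.
        String jobId = oozie.run(conf);
        WorkflowJob job = oozie.getJobInfo(jobId);
        System.out.println(jobId + " -> " + job.getStatus());
    }
}
```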
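
Finally, a sketch of Crunch's pipeline style (again word count), assuming the MapReduce execution engine and placeholder input/output paths.

```java
import org.apache.crunch.DoFn;
import org.apache.crunch.Emitter;
import org.apache.crunch.PCollection;
import org.apache.crunch.PTable;
import org.apache.crunch.Pipeline;
import org.apache.crunch.impl.mr.MRPipeline;
import org.apache.crunch.types.writable.Writables;
import org.apache.hadoop.conf.Configuration;

public class CrunchWordCount {
    public static void main(String[] args) throws Exception {
        // A pipeline backed by MapReduce jobs; the paths are placeholders.
        Pipeline pipeline = new MRPipeline(CrunchWordCount.class, new Configuration());
        PCollection<String> lines = pipeline.readTextFile("/user/demo/input");

        // Split lines into words, then count occurrences of each word.
        PCollection<String> words = lines.parallelDo(new DoFn<String, String>() {
            @Override
            public void process(String line, Emitter<String> emitter) {
                for (String word : line.split("\\s+")) {
                    if (!word.isEmpty()) {
                        emitter.emit(word);
                    }
                }
            }
        }, Writables.strings());

        PTable<String, Long> counts = words.count();
        pipeline.writeTextFile(counts, "/user/demo/wordcounts");
        pipeline.done();
    }
}
```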


This article is from the "Xiao Mo" blog; please keep the original source: http://xiaofengmo.blog.51cto.com/10116365/1744192
