Hive Learning Notes and Grammar

Source: http://zhangrenhua.com (the blog has since moved)

1. Hive Structure

Hive is a data warehouse infrastructure built on Hadoop. It provides a set of tools for extract-transform-load (ETL) work and a mechanism to store, query, and analyze large-scale data held in Hadoop. Hive defines a simple SQL-like query language called QL, which lets users familiar with SQL query the data. The language also lets developers familiar with MapReduce plug in custom mappers and reducers for complex analytical work that the built-in mappers and reducers cannot handle.

1.1 Hive Architecture

The structure of hive can be divided into the following sections:

User interface: includes the CLI, the Client, and the WUI

Metadata store: usually a relational database such as MySQL or Derby

Interpreter, compiler, optimizer, executor

Hadoop: HDFS for storage, MapReduce for computation

There are three main user interfaces: the CLI, the Client, and the WUI. The CLI is the most common; starting it also launches a local copy of Hive. The Client is Hive's client program, with which the user connects to a Hive Server; when starting in client mode you must indicate the node where Hive Server runs and start Hive Server on that node. The WUI accesses Hive through a browser.

Hive stores its metadata in a database such as MySQL or Derby. The metadata includes the names of tables, the columns and partitions of each table and their properties, table-level properties (whether a table is external, and so on), and the directory where each table's data resides.
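As a sketch, a MySQL-backed metastore is typically configured in hive-site.xml through JDO connection properties like the following; the host name, database name, user, and password here are placeholders, not values from this article:

```xml
<!-- Sketch only: placeholder host and credentials; adjust for your environment -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://dbhost:3306/metastore_db?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepass</value>
</property>
```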

The interpreter, compiler, and optimizer carry an HQL statement through lexical analysis, parsing, compilation, optimization, and query-plan generation. The generated plan is stored in HDFS and later executed as MapReduce jobs.

Hive data is stored in HDFS, and most queries are executed by MapReduce (queries that only read data, such as SELECT * FROM tbl, do not generate MapReduce tasks).

1.2 The Relationship Between Hive and Hadoop

Hive is built on top of Hadoop:

The interpretation, optimization, and query planning of HQL statements is done by Hive.

All data is stored in Hadoop.

Query plans are converted into MapReduce tasks and executed in Hadoop (some queries produce no MapReduce task, e.g. SELECT * FROM table).

Both Hadoop and Hive use UTF-8 encoding.
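The distinction above, between queries that do and do not produce MapReduce jobs, can be sketched in HQL using a hypothetical table page_views:

```sql
-- No MapReduce job: a plain SELECT * is answered by reading the table's files directly.
SELECT * FROM page_views;

-- Compiled into a MapReduce job: aggregation needs map and reduce phases.
SELECT COUNT(*) FROM page_views;
```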

1.3 Similarities and Differences Between Hive and Common Relational Databases

                      Hive                       RDBMS
Query language        HQL                        SQL
Data storage          HDFS                       Raw device or local FS
Index                 No                         Yes
Execution             MapReduce                  Executor
Execution latency     High                       Low
Data size             Big                        Small

Query language. Because SQL is widely used in data warehousing, the SQL-like query language HQL was designed around Hive's characteristics; developers familiar with SQL can pick up Hive development easily.

Data storage. Hive is built on top of Hadoop, so all Hive data is stored in HDFS. An RDBMS, by contrast, can keep its data on a raw block device or a local file system.

Data format. Hive does not define a fixed data format; the user specifies it with three attributes: a column delimiter (usually space, "\t", or the control character "\001"), a line delimiter ("\n"), and a method for reading file data (Hive provides three built-in file formats: TEXTFILE, SEQUENCEFILE, and RCFILE). Because loading requires no conversion from the user's format into a format defined by Hive, Hive does not modify the data itself during a load; it simply copies or moves the files into the appropriate HDFS directory. In a database, by contrast, different storage engines define their own data formats, and all data is stored in a certain organization, so loading data into a database can be time-consuming.
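As an illustration in Python (not Hive code), this is how one record of a default-delimited TEXTFILE splits into columns, assuming the default field delimiter \x01 (Ctrl-A) and the line delimiter \n:

```python
# Illustration only: split one line of a Hive TEXTFILE record into columns.
# Hive's default TEXTFILE field delimiter is the control character \x01 (Ctrl-A);
# the line delimiter is \n.
FIELD_DELIM = "\x01"

def parse_row(line):
    """Split one newline-terminated record into its column values."""
    return line.rstrip("\n").split(FIELD_DELIM)

print(parse_row("1001\x01alice\x012015-06-01\n"))
# ['1001', 'alice', '2015-06-01']
```

This mirrors why Hive can load such files with a plain copy: the delimiters are all the structure the format needs.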

Data updates. Hive is designed for data warehouse applications, whose content is read often and rewritten rarely. Hive therefore does not support rewriting or adding to data row by row; the content of the data is fixed when it is loaded. Data in a database, on the other hand, is modified frequently, so you can add rows with INSERT INTO ... VALUES and modify them with UPDATE ... SET.
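The contrast can be sketched as follows; the table names users and users_staging and the file path are hypothetical:

```sql
-- Hive: data is fixed at load time; you load or overwrite whole tables.
LOAD DATA LOCAL INPATH '/tmp/users.txt' INTO TABLE users;
INSERT OVERWRITE TABLE users SELECT * FROM users_staging;

-- RDBMS: row-level changes are routine.
INSERT INTO users VALUES (1, 'alice');
UPDATE users SET name = 'bob' WHERE id = 1;
```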

Index. As noted above, Hive does no processing of the data during loading, not even a scan, and therefore builds no index on any keys in the data. To access specific values satisfying a condition, Hive must brute-force scan the entire dataset, so access latency is high. Thanks to MapReduce, however, Hive can access the data in parallel, so even without indexes it still shows its advantage on very large data. A database is usually indexed on one or a few columns, so it can serve small, selective queries with high efficiency and low latency. This high access latency is what makes Hive unsuitable for online data querying.
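A small Python illustration (not Hive code) of why the index matters: without one, every record must be scanned per query; with one, matching positions are found directly:

```python
# Illustration: full scan vs. index lookup for the value 20 in the second column.
rows = [("u1", 10), ("u2", 20), ("u3", 20)]

# Full scan, O(n) per query -- Hive's approach, parallelized by MapReduce:
scan_hits = [r for r in rows if r[1] == 20]

# Build an index once, then look up in O(1) average time -- the RDBMS approach:
index = {}
for pos, (_, value) in enumerate(rows):
    index.setdefault(value, []).append(pos)
index_hits = [rows[p] for p in index[20]]

print(scan_hits == index_hits)  # True
```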

Execution. Most queries in Hive are executed through the MapReduce provided by Hadoop (queries like SELECT * FROM tbl require no MapReduce). A database usually has its own execution engine.

Execution latency. As mentioned above, Hive must scan the whole table when querying because it has no indexes, so latency is high. The other factor behind Hive's high latency is the MapReduce framework itself: MapReduce has high overhead of its own, so executing Hive queries through it adds latency. Database execution latency is low by contrast, but only conditionally, namely when the data is small; once the data grows beyond what the database can handle, Hive's parallel computation clearly shows its advantage.

Scalability. Because Hive is built on top of Hadoop, its scalability matches Hadoop's (the world's largest Hadoop cluster, at Yahoo!, was around 4,000 nodes in 2009). Databases, constrained by strict ACID semantics, scale out far less well; even the most advanced parallel database, Oracle, had a theoretical scaling capacity of only about 100 machines.

Data size. Hive is built on a cluster and can exploit MapReduce for parallel computation, so it supports very large data; correspondingly, a database supports comparatively small data.

1.4 Hive Metadata Database

Hive stores its metadata in an RDBMS, most often MySQL or Derby.

1.4.1 Derby

Starting Hive's metadata database (example paths are from a Hive build under /home/admin/caona/hive/build/dist/):

1. Start the Derby database

Go to Hive's installation directory /home/admin/caona/hive/build/dist/ and run:

startNetworkServer -h 0.0.0.0

2. Connect to the Derby database for testing

Check /home/admin/caona/hive/build/dist/conf/hive-default.xml for the property:

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby://hadoop1:1527/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>

Enter the Derby installation directory /home/admin/caona/hive/build/dist/db-derby-10.4.1.3-bin/bin, start the interactive tool with ./ij, and connect:

connect 'jdbc:derby://hadoop1:1527/metastore_db;create=true';
