Installing Hive and MySQL

Source: Internet
Author: User
Tags: hadoop, fs

First check which MySQL packages the system already has installed, using the command rpm -qa | grep mysql

To install your own version, first remove the existing one: rpm -e xxxxx --nodeps (note that rpm -e takes the package name, not the .rpm file)

Then install your own version: rpm -i xxxxx.rpm

To allow remote connections to MySQL, proceed as follows:

1. Enter the MySQL shell: mysql -uroot -proot

2. Authorize Hive to connect to the MySQL database remotely: grant all on hive.* to 'root'@'%' identified by 'root';

This statement grants the root user, connecting from any host (%), all privileges on the hive database; identified by 'root' supplies that user's password (my root user's password happens to be root as well).

3. After the setup is complete, refresh the privilege tables: flush privileges;
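With the grant in place, Hive's metastore can be pointed at this MySQL database. A minimal hive-site.xml fragment might look like the following; the host, user, and password here are assumptions matching the grant above, and the MySQL JDBC driver jar must also be placed in Hive's lib directory:

```xml
<!-- Point the Hive metastore at the MySQL database authorized above.
     Host/user/password are illustrative; adjust to your environment. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.56.100:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
</property>
```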

Creating and populating a table takes two steps: first create the table, then load data into it.

Create a table: create table t1 (id int);

Hive tables are actually stored as files in Hadoop's file system, so you can add data to the t1 table in two ways:

Method one, use the command load data local inpath '/usr/local/id' into table t1;

Method two, use the command hadoop fs -put /usr/local/id /hive/t1 directly

The id file above is on the local filesystem of the Hadoop node.
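Since a Hive table is just a directory of files, both methods amount to dropping a file into the table's directory. A local-filesystem sketch of this idea (plain shell, no Hadoop required; the paths are made up for illustration, the real location in HDFS would be under Hive's warehouse directory):

```shell
# Simulate a Hive table directory on the local filesystem.
tabledir=$(mktemp -d)/t1
mkdir -p "$tabledir"

# Two "loads" just drop two data files into the table directory.
printf '1\n2\n' > "$tabledir/part-0"
printf '3\n'    > "$tabledir/part-1"

# "select * from t1" amounts to reading every file under the directory.
cat "$tabledir"/*
```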

Create a table t2 with multiple fields:

create table t2 (id int, name string) row format delimited fields terminated by '\t';

Then create a stu file locally on the Hadoop node, for example under the /usr/local/ directory, containing the two fields separated by a tab.

Then add it to t2: hadoop fs -put /usr/local/stu /hive/t2
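The stu file can be written with printf so the separator really is a tab character, which is what the table definition expects; the sample ids, names, and path below are just illustrative:

```shell
# Write a tab-separated two-column file; printf guarantees a real tab.
stufile=$(mktemp)
printf '1\tTom\n2\tJerry\n' > "$stufile"

# Check that the columns split on the tab, as Hive will split them.
cut -f2 "$stufile"
```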

When querying in Hive, every statement except select * from ... launches a MapReduce job; select * from does not, because it is simply a full scan that reads the table's files directly.

Create a partitioned table

For example, to store table t3 partitioned by day:

create table t3 (id int) partitioned by (day int); This statement means that table t3 is stored partitioned by day.

load data local inpath '/root/id' into table t3 partition (day='22');

load data local inpath '/root/id' into table t3 partition (day='23');

Executing the statements above creates two subdirectories, day=22 and day=23, under the t3 table directory (inside the hive folder in Hadoop), each holding that partition's data.

Then queries can use: select * from t3 where day=22;
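The directory layout behind this can be sketched locally (plain shell, paths illustrative): each partition value becomes a subdirectory, so a where day=22 query only has to read that one directory instead of the whole table:

```shell
# Simulate the partition layout Hive creates under the t3 table directory.
t3dir=$(mktemp -d)/t3
mkdir -p "$t3dir/day=22" "$t3dir/day=23"
printf '1\n2\n' > "$t3dir/day=22/id"
printf '3\n'    > "$t3dir/day=23/id"

# "where day=22" = read only the day=22 subdirectory (partition pruning).
cat "$t3dir/day=22"/*
```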

Bucket table

A bucket table hashes the data (the hash value modulo the number of buckets) and stores each bucket in a different file.

create table t4 (id int) clustered by (id) into 4 buckets; This distributes rows into 4 buckets according to the hash of the id column.

set hive.enforce.bucketing = true;

insert into table t4 select id from t3; This distributes the data in t3 into t4's buckets based on the hash of id.

Note: each bucket in t4 corresponds to one file, which stores the data allocated to it from the t3 table.

Partitioned tables split data into folders; bucket tables split data into files.
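The bucket assignment itself can be sketched with awk, under the assumption (for illustration) that for an int column the hash is the value itself, so the bucket is simply id mod 4:

```shell
# Assign each non-negative id to one of 4 buckets, mimicking hash(id) % 4.
# Ids 4 and 8 land in the same bucket file, bucket-0.
printf '1\n2\n3\n4\n5\n6\n7\n8\n' | awk '{ print $1 "\tbucket-" $1 % 4 }'
```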

External tables

The tables above are all internal (managed) tables.

External tables store their data outside Hive's warehouse directory; the command is:

create external table t5 (id int) location '/external';

This creates an external table t5, which in Hive is just a link: dropping t5 only deletes the link (the metadata), while the table's data remains in the external location.
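The "just a link" behavior is analogous to a symbolic link on an ordinary filesystem: deleting the link does not delete what it points to. A local sketch of that analogy (paths illustrative):

```shell
# Data lives in an "external" directory, outside the table itself.
base=$(mktemp -d)
mkdir "$base/external"
printf '1\n2\n' > "$base/external/id"

# t5 is only a link to it; removing t5 removes the link, not the data.
ln -s "$base/external" "$base/t5"
rm "$base/t5"
cat "$base/external/id"
```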

Java Client Operation Hive:

First start the Hive remote service.

That is, execute in a terminal: hive --service hiveserver > /dev/null 2>/dev/null &

Then add the jar packages from Hive's lib directory to Eclipse's build path.

    package hive;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class App {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
            // connect to the default database
            Connection con = DriverManager.getConnection(
                    "jdbc:hive://192.168.56.100:10000/default", "", "");
            Statement stmt = con.createStatement();
            // query the t1 table in the default database
            String querySQL = "SELECT * FROM default.t1";
            ResultSet res = stmt.executeQuery(querySQL);
            while (res.next()) {
                System.out.println(res.getInt(1)); // note: JDBC columns are numbered from 1
            }
        }
    }