1. Installation and Configuration
You can install a stable version of Hive by downloading a compressed release package, or you can download the source code and compile it yourself.
1.1 Running HiveServer2 and Beeline
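A minimal sketch of this step, assuming the installation described in 1.3 below is complete and HiveServer2 listens on its default port 10000:
$HIVE_HOME/bin/hiveserver2
# In a second terminal, connect with Beeline over JDBC:
$HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000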
1.2 Requirements
Java 1.7 or later (the official website recommends Java 1.8) and Hadoop 2.x.
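A quick way to verify these requirements from the command line, assuming java and hadoop are already on the PATH:
java -version
hadoop version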
1.3 Installing a stable version of Hive
Download the current stable version from http://mirrors.cnnic.cn/apache/hive/hive-2.1.0/. Unzip and rename:
tar -zxvf apache-hive-2.1.0-bin.tar.gz
mv apache-hive-2.1.0-bin hive-2.1.0
Set the HIVE_HOME variable in /etc/profile or ~/.bashrc so that it points to the Hive installation directory:
export HIVE_HOME=/opt/hive/hive-2.1.0
export PATH=$PATH:$HIVE_HOME/bin:$HIVE_HOME/conf
Make the configuration take effect immediately:
source /etc/profile
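To confirm that the variables took effect, a quick check (assuming the paths above):
echo $HIVE_HOME
hive --version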
Next, modify the Hive configuration files (with the default files, no modification is required). There are two ways to set up the metastore: using the embedded Derby database, or using a different database (such as MySQL).
How to install using the Derby database
Apache Derby is a database written entirely in Java, so it is cross-platform, but it must run inside a JVM. Metadata, such as table schemas and partition information, is stored in the Derby database. This is also the default way to install Hive.
Create the hive-env.sh file:
cp hive-env.sh.template hive-env.sh
Add to hive-env.sh:
# Set HADOOP_HOME to point to a specific Hadoop install directory
export HADOOP_HOME=/opt/hadoop/hadoop-2.7.2
# Hive configuration directory can be controlled by:
export HIVE_CONF_DIR=/opt/hive/hive-2.1.0/conf
Create hive-site.xml:
cd /opt/hive/hive-2.1.0/conf
cp hive-default.xml.template hive-site.xml
Configure the hive-site.xml file:
The parameter hive.metastore.warehouse.dir specifies the directory on HDFS where the data in Hive tables is stored; the default value is /user/hive/warehouse (the parameter seemed to be missing, so I added it).
The parameter hive.exec.scratchdir specifies Hive's directory for temporary data files; the default location is /tmp/hive-${user.name}. Here it is configured as /tmp/hive.
Database connection configuration:
The parameter javax.jdo.option.ConnectionURL specifies the connection string: jdbc:derby:;databaseName=metastore_db;create=true
The parameter javax.jdo.option.ConnectionDriverName specifies the driver class name: org.apache.derby.jdbc.EmbeddedDriver
The parameter javax.jdo.option.ConnectionUserName specifies the database user name: APP
The parameter javax.jdo.option.ConnectionPassword specifies the database password: mine
The Derby database driver is under the directory $HIVE_HOME/lib/: derby-10.10.2.0.jar
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=metastore_db;create=true</value>
<description>
Tells Hive how to connect to the metastore server. By default, the current directory is used as the databaseName part of the value, which causes Derby to forget the metadata stored under the previous directory whenever the user changes the working directory. We can instead set databaseName=/opt/hive/hive-2.1.0/metastore_db as an absolute path, i.e. the path where the metastore_db directory is located. This setting solves the problem of Hive creating a new metastore_db directory under the current working directory each time a new Hive session is opened; we can then access all the metadata no matter which directory we work in.
</description>
</property>
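As the description above suggests, the databaseName part can be pinned to an absolute path so that the metastore location no longer depends on the working directory. A minimal sketch, assuming the installation directory used in this guide:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/opt/hive/hive-2.1.0/metastore_db;create=true</value>
</property>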
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.apache.derby.jdbc.EmbeddedDriver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>APP</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>mine</value>
<description>password to use against metastore database</description>
</property>
Start the Hive command line by typing hive. It displays an error:
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. createDatabaseIfNotExist=true for mysql))
Based on the error, the metastore database has not been initialized. Follow the prompt and execute the command schematool -initSchema -dbType derby to initialize the metastore. However, an error may still be reported at this point:
Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000)
This is because the metastore_db folder was generated before. Delete the metastore_db folder under the current working directory:
rm -r metastore_db
and rerun the schematool -initSchema -dbType derby command.
After executing hive again, there is another error:
java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/${system:user.name%7D
Workaround: create a folder for temporary IO in the Hive installation directory:
mkdir iotmp
Then, in the hive-site.xml file, replace ${system:java.io.tmpdir} with the absolute path of this iotmp directory. Execute hive again and it starts successfully.
Before Hive creates tables, create the /tmp and /user/hive/warehouse directories on HDFS (as set in the default configuration file) and chmod g+w them (this step can be skipped; they will be created automatically):
$HADOOP_HOME/bin/hadoop fs -mkdir /tmp/hive
$HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp/hive
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
Test by building a test table:
create table test (key string);
show tables;
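To double-check that the test table was created under the warehouse directory, it can be listed on HDFS (assuming the default warehouse path configured above):
$HADOOP_HOME/bin/hadoop fs -ls /user/hive/warehouse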
How to install using a MySQL database
With MySQL, metadata such as table schemas and partition information is stored in the MySQL database. On Ubuntu, install MySQL with:
sudo apt-get install mysql-server
Create the database hive:
create database hive;
Create the hive user and grant privileges:
grant all on hive.* to hive@'%' identified by 'hive';
The first hive is the database, the second is a user name, and the third is a password. Force the privileges to be written:
flush privileges;
Create hive-env.sh and hive-site.xml exactly as in the Derby setup above (copy the templates, set HADOOP_HOME and HIVE_CONF_DIR in hive-env.sh, and set hive.metastore.warehouse.dir and hive.exec.scratchdir the same way), then modify hive-site.xml:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>hive.metastore.local</name>
<value>true</value>
<description>controls whether to connect to a remote metastore server or open a new metastore server in Hive Client JVM</description>
</property>
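Before starting Hive, it is worth confirming that the hive user created above can actually reach the database; a quick check, assuming MySQL is running locally on the default port 3306 as in the ConnectionURL above:
mysql -u hive -phive -e 'show databases;'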
An error occurred running Hive:
Caused by: MetaException(message:Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. createDatabaseIfNotExist=true for mysql))
I then executed $HIVE_HOME/bin/schematool -initSchema -dbType derby, which reported:
Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''APP'.NUCLEUS_ASCII' (C CHAR(1)) RETURNS INTEGER LANGUAGE JAVA PARAMETER STYLE' at line 1 (state=42000,code=1064)
I found out that derby was used during initialization when it should have been mysql, so just execute $HIVE_HOME/bin/schematool -initSchema -dbType mysql, then execute hive: success.
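After the initialization succeeds, the metastore schema version can be confirmed with schematool's info option; a quick check against the MySQL-backed metastore configured above:
$HIVE_HOME/bin/schematool -dbType mysql -info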
1.4 Problems encountered:
(1) Hive metastore database is not initialized. Workaround: follow the prompt and execute the command schematool -initSchema -dbType derby to initialize the metastore. However, an error may still be reported at this point: Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000). This is because the metastore_db folder was generated before. Delete the metastore_db folder under the current working directory and rerun the schematool -initSchema -dbType derby command. Hive can then start normally with the embedded Derby database as the metastore.
(2) Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. This means there is no mysql-connector-java-*.*.* jar under $HIVE_HOME/lib. Download mysql-connector-java-5.1.39.tar.gz from http://cdn.mysql.com//Downloads/Connector-j/mysql-connector-java-5.1.39.tar.gz and, after decompression, copy mysql-connector-java-5.1.39-bin.jar to $HIVE_HOME/lib.
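The download-and-copy steps as a command sequence; a sketch, where the version number 5.1.39 matches the link above and should be adjusted for other versions:
wget http://cdn.mysql.com//Downloads/Connector-j/mysql-connector-java-5.1.39.tar.gz
tar -zxvf mysql-connector-java-5.1.39.tar.gz
cp mysql-connector-java-5.1.39/mysql-connector-java-5.1.39-bin.jar $HIVE_HOME/lib/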