While integrating MySQL as the Hive metastore, I finished all of the installation and configuration work and entered the Hive CLI; show databases executed normally, but show tables then failed with an error.
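For reference, the MySQL-side preparation behind that configuration work is roughly the following sketch (the database name hive_test matches the one used later in this article; the account name and password are placeholders, not taken from the original setup):

-- Illustrative metastore preparation in MySQL; user name and password are placeholders.
CREATE DATABASE hive_test;
CREATE USER 'hiveuser'@'%' IDENTIFIED BY 'hivepassword';
GRANT ALL PRIVILEGES ON hive_test.* TO 'hiveuser'@'%';
FLUSH PRIVILEGES;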
The key error message is as follows:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
The full session output is as follows:
hive> show databases;
OK
default
Time taken: 8.638 seconds
hive> show tables;
FAILED: Error in metadata: MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4098)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4030)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2490)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2651)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2671)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2621)
at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:842)
at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:681)
at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:730)
at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatementList(AbstractTable.java:681)
at org.datanucleus.store.rdbms.table.AbstractTable.create(AbstractTable.java:402)
at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:458)
at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:2689)
at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:2503)
at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2148)
at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113)
at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:986)
at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:952)
at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:919)
at org.datanucleus.store.mapped.MappedStoreManager.getDatastoreClass(MappedStoreManager.java:356)
at org.datanucleus.store.rdbms.query.legacy.ExtentHelper.getExtent(ExtentHelper.java:48)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getExtent(RDBMSStoreManager.java:1332)
at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4149)
at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411)
at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312)
at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225)
at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1628)
at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499)
at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243)
at org.apache.hadoop.hive.metastore.ObjectStore.getTables(ObjectStore.java:781)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
at com.sun.proxy.$Proxy4.getTables(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_tables(HiveMetaStore.java:2327)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
at com.sun.proxy.$Proxy5.get_tables(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTables(HiveMetaStoreClient.java:817)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
at com.sun.proxy.$Proxy6.getTables(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.getTablesByPattern(Hive.java:1009)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllTables(Hive.java:983)
at org.apache.hadoop.hive.ql.exec.DDLTask.showTables(DDLTask.java:2215)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:334)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:138)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1336)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1122)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:935)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
NestedThrowables:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
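Before applying the fix below, it is worth confirming which character set the metastore database actually uses; a minimal check in the MySQL client (assuming the metastore database is the hive_test database used later in this article):

-- Show the default character set of the Hive metastore database.
SELECT SCHEMA_NAME, DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME
FROM information_schema.SCHEMATA
WHERE SCHEMA_NAME = 'hive_test';

If this reports utf8 (or utf8mb4), the CREATE TABLE / CREATE INDEX statements that DataNucleus issues for the metastore will run into the 767-byte index key limit.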
Solution:
The metastore database was created with a multi-byte character set such as utf8, where each character can take up to 3 bytes, so the index keys DataNucleus builds on the metastore's long VARCHAR columns exceed InnoDB's 767-byte key limit. Change the character set of the Hive metastore database we created to latin1 (one byte per character), for example:
ALTER DATABASE hive_test character set latin1;
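Note that ALTER DATABASE only changes the default character set for tables created afterwards; any metastore tables DataNucleus has already created keep their old character set. If the metastore holds no metadata worth preserving yet, a clean variant of the same fix (a sketch, assuming the database name hive_test from above) is to recreate the database with latin1 and let Hive regenerate the schema:

-- Recreate the metastore database with a single-byte character set.
-- Only do this if there is no Hive metadata you need to keep.
DROP DATABASE hive_test;
CREATE DATABASE hive_test CHARACTER SET latin1;

After that, rerunning show tables in the Hive CLI should let the metastore create its tables without hitting the 767-byte limit.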