Environment: Hadoop 2.2.0, HBase 0.96, sqoop-1.4.4.bin__hadoop-2.0.4-alpha.tar.gz, Oracle 11g, JDK 1.7, Ubuntu 14 Server.
A quick rant about the tooling: the latest Sqoop 2 release (1.99.3) is still far too limited; it only supports importing data into HDFS, with no other targets, which is just too crude. (If you have a different opinion, please share your solution.)
Command:
sqoop import --connect jdbc:oracle:thin:@192.168.0.147:1521:orclgbk --username zhaobiao -P --table CMS_NEWS_0625 --hbase-create-table --hbase-table 147patents --column-family patentinfo
Points to note:
1. The Oracle table name must be uppercase.
2. The username must be uppercase.
3. I originally intended to use the following parameter to build a composite row key (see the sketch below): --hbase-row-key Create_time,publish_time,operate_time,title
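For reference, the intended command with that parameter would look roughly like this (a sketch assembled from the command above plus the row-key parameter; it is not guaranteed to match the exact invocation used):

sqoop import --connect jdbc:oracle:thin:@192.168.0.147:1521:orclgbk --username zhaobiao -P --table CMS_NEWS_0625 --hbase-create-table --hbase-table 147patents --column-family patentinfo --hbase-row-key Create_time,publish_time,operate_time,title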
But it always fails with the following error:

Error: java.io.IOException: Could not insert row with null value for row-key column: operate_time
    at org.apache.sqoop.hbase.ToStringPutTransformer.getPutCommand(ToStringPutTransformer.java:125)
    at org.apache.sqoop.hbase.HBasePutProcessor.accept(HBasePutProcessor.java:142)
    at org.apache.sqoop.mapreduce.DelegatingOutputFormat$DelegatingRecordWriter.write(DelegatingOutputFormat.java:128)
    at org.apache.sqoop.mapreduce.DelegatingOutputFormat$DelegatingRecordWriter.write(DelegatingOutputFormat.java:92)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:634)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at org.apache.sqoop.mapreduce.HBaseImportMapper.map(HBaseImportMapper.java:38)
    at org.apache.sqoop.mapreduce.HBaseImportMapper.map(HBaseImportMapper.java:31)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
However, after removing that parameter, the import runs normally, and the row key is the primary key ID of the original table.
This issue is still pending!
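The error message itself points at the cause: some rows have a NULL operate_time, and Sqoop refuses to build a composite row key from a NULL column. One possible direction, untested here and only a sketch, is to make sure every row-key column is non-null before the import, for example by filtering out the offending rows with --where (the column names below are assumed from the intended row-key list, and whether dropping such rows is acceptable depends on the data):

sqoop import --connect jdbc:oracle:thin:@192.168.0.147:1521:orclgbk --username zhaobiao -P --table CMS_NEWS_0625 --where "CREATE_TIME IS NOT NULL AND PUBLISH_TIME IS NOT NULL AND OPERATE_TIME IS NOT NULL AND TITLE IS NOT NULL" --hbase-create-table --hbase-table 147patents --column-family patentinfo --hbase-row-key Create_time,publish_time,operate_time,title

Alternatively, the NULLs could be replaced with a default value on the Oracle side (e.g. via NVL in a view) so that those rows are kept instead of skipped.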