Hive provides two ways to import data.
1. Import from another table:
INSERT OVERWRITE TABLE test
SELECT * FROM test2;
2. Import from a file:
2.1 Import from a local file:
LOAD DATA LOCAL INPATH '/Hadoop/aa.txt' OVERWRITE INTO TABLE test11;
2.2 Import from HDFS:
LOAD DATA INPATH '/hadoop/aa.txt' OVERWRITE INTO TABLE test11;
3. Column delimiters for import files
When creating a table, you can specify the field delimiter:
CREATE TABLE test11 (id int, name string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\;';
This splits file columns on semicolons (the backslash keeps the Hive CLI from treating ';' as the end of the statement). A line in the imported data file then looks like 1;John.
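As a quick sketch of what such a data file must look like, the following Python snippet (file name and sample rows are illustrative, not from the original) writes lines matching the schema and delimiter above:

```python
# Hypothetical sketch: generate a data file matching the table
# test11 (id int, name string) with ';' as the field delimiter.
rows = [(1, "John"), (2, "Mary")]

with open("test11.txt", "w") as f:
    for id_, name in rows:
        # Each line must use the delimiter declared in the DDL,
        # e.g. "1;John"; otherwise Hive reads the columns as NULL.
        f.write(f"{id_};{name}\n")

print(open("test11.txt").read())
```

A file produced this way can then be loaded with the LOAD DATA statements from section 2.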
4. Exporting data
Typically: bin/hive -e "select * from test" > res.csv
Or: bin/hive -f sql.q > res.csv (where the file sql.q contains the query you want to execute)
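Note that the Hive CLI prints columns separated by tabs, so the redirected "res.csv" above is not yet a real CSV. A small post-processing step can convert it; the snippet below is a sketch with stand-in data (file names and contents are illustrative):

```python
import csv

# Stand-in for lines redirected from "bin/hive -e ... > res.csv":
# the Hive CLI separates columns with tabs by default.
tsv_lines = ["1\tJohn\n", "2\tMary\n"]

with open("res.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for line in tsv_lines:
        # Split on tabs, write back as comma-separated values.
        writer.writerow(line.rstrip("\n").split("\t"))

print(open("res.csv").read())
```

This keeps the export command itself unchanged and only fixes the delimiter afterwards.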
5. Garbled Chinese characters
Hive's default character encoding is UTF-8, so data stored as UTF-8 displays normally. If the data still looks garbled when viewed through an SSH client tool, the cause is probably the client's own encoding setting. Taking SecureCRT as an example, change the character encoding under Session Options > Appearance (this does not affect other sessions). On Linux you may also need basic UTF-8 locale support, which is usually already present.