You need to create the table first:
CREATE TABLE IF NOT EXISTS population (
  state CHAR(2) NOT NULL,
  city VARCHAR NOT NULL
  CONSTRAINT my_pk PRIMARY KEY (state, city));
Run from the Phoenix directory:
hadoop jar /home/phoenix-4.6.0-HBase-1.0-bin/phoenix-4.6.0-HBase-1.0-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -t population -i /datas/us_population.csv
-t: table name
-i: input file; the file must already be on HDFS.
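Because the tool reads its input from HDFS, the CSV has to be copied there first. A minimal sketch, assuming us_population.csv sits in the current local directory and /datas is the target HDFS directory:
hdfs dfs -mkdir -p /datas
hdfs dfs -put us_population.csv /datas/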
After the job runs, querying the table returns no data.
Problem: ERROR mapreduce.CsvBulkLoadTool: Error Wrong FS: file:/home/hadoop/tmp/partitions_101bd67a-ec2c-4808-bc9f-bf4cd6ea74b9, expected: hdfs://node11:9000 occurred submitting CSVBulkLoad
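The "expected: hdfs://node11:9000" part of the message reflects the cluster's fs.defaultFS setting, while the partitions file was written to the local file system instead of HDFS. As a quick diagnostic (this check is an assumption, not part of the original report), the value the job expects can be printed with:
hdfs getconf -confKey fs.defaultFS    (should print hdfs://node11:9000)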
Parameter meanings:
 -a,--array-delimiter <arg>   Array element delimiter (optional)
 -c,--import-columns <arg>    Comma-separated list of columns to be imported
 -d,--delimiter <arg>         Input delimiter, defaults to comma
 -e,--escape <arg>            Supply a custom escape character, default is a backslash
 -g,--ignore-errors           Ignore input errors
 -h,--help                    Show this help and quit
 -i,--input <arg>             Input CSV path (mandatory)
 -it,--index-table <arg>      Phoenix index table name when just loading this particular index table
 -o,--output <arg>            Output path for temporary HFiles (optional)
 -q,--quote <arg>             Supply a custom phrase delimiter, defaults to double quote character
 -s,--schema <arg>            Phoenix schema name (optional)
 -t,--table <arg>             Phoenix table name (mandatory)
 -z,--zookeeper <arg>         Supply ZooKeeper connection details (optional)
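As a fuller illustration of the options above (the ZooKeeper quorum node11:2181 and the explicit comma delimiter are assumed values, not taken from the original command):
hadoop jar /home/phoenix-4.6.0-HBase-1.0-bin/phoenix-4.6.0-HBase-1.0-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -t population -i /datas/us_population.csv -d ',' -z node11:2181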
Phoenix's CsvBulkLoadTool loads data in bulk via MapReduce and automatically populates the table's indexes as part of the load.
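For example (a sketch; the index name and the verification query are illustrative, not from the original post), an index that exists before the load is filled in by the same job:
CREATE INDEX population_city_idx ON population (city);
-- after CsvBulkLoadTool finishes, both population and population_city_idx contain the loaded rows
SELECT * FROM population LIMIT 10;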