1: Configure hive-site.xml first
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:postgresql://192.168.56.103:5432/sparksql</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.postgresql.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>postgres</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>gaoxing</value>
  </property>
</configuration>
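The connection URL above assumes a database named sparksql already exists on the PostgreSQL server; if it does not, it can be created up front. A minimal sketch, reusing the host, port, and user from the configuration above:

# create the metastore database referenced by javax.jdo.option.ConnectionURL
createdb -h 192.168.56.103 -p 5432 -U postgres sparksql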
2: Configure the JDBC jar path for PostgreSQL
Configure it in spark-defaults.conf:
spark.driver.extraClassPath=/opt/spark/lib/postgresql-9.4.jar
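With the driver jar on the classpath, the Thrift JDBC/ODBC server can be started. A rough sketch of the commands, assuming Spark is installed under /opt/spark as in the jar path above; the port check is just one way to confirm the server is up:

# start the Spark SQL Thrift server (script shipped in Spark's sbin directory)
/opt/spark/sbin/start-thriftserver.sh
# verify that it is listening on the default port 10000
netstat -nltp | grep 10000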
Question 1
After starting the Thrift server, port 10000 never starts listening. Renaming hive-site.xml so that the default Derby database is used lets it start normally.
Searching the web turned up the answer:
It turns out that Hive tries to create its metastore tables in PostgreSQL automatically, and PostgreSQL ends up locking itself up in the process, which is maddening.
The fix is to take the PostgreSQL schema script out of the Hive source and create the tables manually:
https://github.com/apache/hive/blob/master/metastore/scripts/upgrade/postgres/hive-schema-1.2.0.postgres.sql
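Assuming the database, host, and user from hive-site.xml above, the schema script can be loaded with psql. A minimal sketch; if the script pulls in companion files with \i, run it from the directory in the Hive source that contains the other scripts:

# load the Hive 1.2.0 metastore schema into the sparksql database
psql -h 192.168.56.103 -p 5432 -U postgres -d sparksql -f hive-schema-1.2.0.postgres.sql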
Question 2
The table names created by the script are uppercase and could not be queried at all. I asked a PostgreSQL DBA sitting nearby, who said it was a schema issue.
CREATE TABLE "CDS" ( "cd_id" bigint not NULL);
PostgreSQL treats quoted identifiers as case sensitive, while unquoted names are folded to lowercase, so these tables really don't play by the usual rules: they can only be queried with the exact quoted name.
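A quick illustration of the behavior (a hypothetical session against the metastore database, using the table created above):

SELECT "cd_id" FROM cds;    -- fails: unquoted cds is folded to lowercase, and no relation "cds" exists
SELECT "cd_id" FROM "CDS";  -- works: the quoted name matches "CDS" exactly as it was created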